Hacker News
Shunting-yard algorithm (wikipedia.org)
75 points by lkurusa on Feb 18, 2019 | 26 comments



My blog links to some runnable Python code for this algorithm, with tests:

Code for the Shunting Yard Algorithm http://www.oilshell.org/blog/2017/04/22.html

https://github.com/bourguet/operator_precedence_parsing

It seems like there are 2 common algorithms for expressions: the shunting yard algorithm and Pratt parsing [1]. As best as I can tell, which one you use is a coin toss. I haven't been able to figure out any real difference.

They're both nicer than grammar-based approaches when you have many levels of precedence, like C.

But it doesn't seem like there is any strict relationship between the two algorithms, which is interesting. I guess there are other places where that happens, like there being at least two unrelated algorithms for topological sort.

Trivia: The original C compilers used the shunting yard algorithm. IIRC the source made reference to Dijkstra.

[1] http://www.oilshell.org/blog/2017/03/31.html
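For concreteness, here is a minimal runnable sketch of the shunting yard algorithm in Python (my own illustration, not the code from the links above): it converts an infix token list to postfix (RPN) with an output queue and an operator stack, and handles only left-associative binary operators.

```python
# Minimal shunting-yard sketch: infix token list -> postfix (RPN) list.
# Left-associative binary operators only; no parentheses, unary, or functions.
PREC = {'+': 1, '-': 1, '*': 2, '/': 2}

def shunting_yard(tokens):
    output, ops = [], []          # output queue and operator stack
    for tok in tokens:
        if tok in PREC:
            # Pop operators of higher-or-equal precedence (left associativity).
            while ops and PREC[ops[-1]] >= PREC[tok]:
                output.append(ops.pop())
            ops.append(tok)
        else:
            output.append(tok)    # operand goes straight to the output
    while ops:                    # flush remaining operators
        output.append(ops.pop())
    return output

print(shunting_yard(['1', '+', '2', '/', '3']))  # ['1', '2', '3', '/', '+']
```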


> I haven't been able to figure out any real difference.

The biggest difference is that Pratt/precedence climbing does not require manipulating an extra stack, but instead uses the normal procedure stack in the same manner as recursive descent. This leads to somewhat simpler code. I've seen far more uses of recursive descent/Pratt than shunting-yard.
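A minimal precedence-climbing sketch in Python (my own illustration; the token-list interface and precedence table are assumptions for brevity), showing how the recursion itself plays the role of shunting yard's explicit operator stack:

```python
# Minimal precedence climbing: recursion replaces the explicit operator stack.
# Atoms only on the left; no parentheses or unary operators in this sketch.
PREC = {'+': 1, '-': 1, '*': 2, '/': 2}

def parse(tokens, pos=0, min_prec=1):
    node = tokens[pos]            # leftmost operand (an atom here)
    pos += 1
    while (pos < len(tokens) and tokens[pos] in PREC
           and PREC[tokens[pos]] >= min_prec):
        op = tokens[pos]
        # Recurse for the right side at higher precedence (left associativity).
        rhs, pos = parse(tokens, pos + 1, PREC[op] + 1)
        node = (op, node, rhs)    # build the tree bottom-up as we return
    return node, pos

tree, _ = parse(['1', '+', '2', '/', '3'])
print(tree)  # ('+', '1', ('/', '2', '3'))
```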


I understand that, but I think it's mostly a matter of opinion ("somewhat simpler"). It seems like the code comes out the same length either way, and they're both linear time algorithms.

I can't imagine any case where a user will care -- it's an internal detail. Whereas there are places where a user would notice if you used a CFG vs. a PEG.

A long time ago, people might have been concerned about the recursive calls of Pratt parsing, but it seems like a non-issue now.

I do think Pratt parsing is easier to understand, but that might be because I encountered it first.


I first tried shunting yard without knowing what it was by mechanically translating the stack usage from a Pratt parser, so I agree.

The reason was that I'd extended it to handle a lot more unusual constructs, and at that point the recursive version got much harder to read.

It was part of a (probably ill-advised) attempt at using it to parse an entire C-like language, control structures and all. It worked, but it was painful to figure out all the precedence rules, and e.g., if I remember correctly, modelling if () {} else {} as operator "if" with operands (condition, if-block, else-expression).

The changes to allow weird prefix operators like that with multiple operands were otherwise reasonably straightforward, though, it was the precedence table that got really hard to reason about.

[I wish I still had the source; it was a fun, though very much dated, project that allowed arbitrary inline M68k assembler in expressions (e.g. you could write "if D0.W == $1000 { }" to compare the lower 16-bit word of data register 0 to hex 1000), with "mentioning" the registers making the register allocator treat them as off limits for that scope]


I don't think Shunting Yard and Pratt are equivalent in that way -- i.e. replacing procedure calls with an explicit stack.

When I wrote "Pratt Parsing and Precedence Climbing Are the Same Algorithm", multiple people replied and suggested that the Shunting Yard algorithm is also the same, with the equivalence you suggest.

In particular, the author of the repo, Jean-Marc Bourguet, also thought that. Then he implemented it and realized it wasn't true.

See this section of the README:

https://github.com/bourguet/operator_precedence_parsing

> I had the mistaken impression that they were "just" replacing an explicit stack with the call stack of the implementation language but that was not clear enough from Andy's code. It was not clear because that's not the case. Compare Pratt's algorithm with recursive_operator_precedence.py, which is a recursive implementation of operator precedence, to see the difference. Pratt and operator climbing are really top down techniques: they call semantic actions as soon as possible, they don't delay the call until the whole handle is seen as operator precedence does. My current characterization is that they are to LL parsing and recursive descent what operator precedence is to LR and shift-reduce

I think we had a discussion about an LL vs. LR analogy. Those are obviously not equivalent algorithms -- despite both being linear time, they recognize different languages.

I think in the Shunting Yard algorithm, when you parse 1 + 2 / 3, the 2 / 3 tree is made first. I guess that is somewhat true of Pratt parsing too for natural reasons of tree building. You build the tree when you bubble out of the recursion.

But it DECIDES that it has "led" for + before it decides it has "led" for /.

So in summary, Shunting Yard "decides" 2 / 3 first, whereas Pratt parsing "decides" 1+... first. That's what I think anyway. I think you could probably make this rigorous, but I'm not sure.

I think of Pratt Parsing as LL(2) parsing for an operator grammar, although that's not technically correct because the precedence table is an extra piece of info outside the "grammar". You look ahead either 1 or 2 tokens, for nud and led respectively.

So basically I think you could translate Pratt parsing to use an explicit stack -- but then you'd still be using Pratt parsing. The Shunting Yard algorithm is something else.


I believe you're right for the recursive version (though a recursive version of shunting yard makes very little sense), but this distinction largely evaporates with an explicit stack.

E.g. Pratt is basically writing a generic recursive descent component ("parse left; parse operand; parse right") and then passing down a precedence where you terminate the subtree parse. That's a large part of the appeal - it slots seamlessly into a recursive descent parser without looking "alien", and it was what I started with. It's nice because it is top down.

The naive way of making the stack explicit here is to "push" a fake stack frame when you would have recursed, and pop when you would have exited. But to do all three of the parse_left/parse_operand/parse_right triple this way you then need to push additional frames to run those steps one after another, and you end up with something really ugly. But if you trace the processing of those additional frames, what you will quickly see is that you can drop them by just splitting the stack into operands and operators. Then you have shunting yard.

So you're sort of right that a naive translation of Pratt that retains the top-down nature of recursive Pratt is distinct from shunting yard, but once you simplify it the before/after distinction evaporates, as there's nothing in between to make the decision before/after.

I think the distinction is large enough that it makes sense to treat them as different algorithms, but the transformation is still mostly a mechanical exercise. Each step is fairly straightforward.


This algorithm is great.

I was working on a new metric alert evaluation system and built an expression parser for things like:

    system.disk.free{*} / 1024 by {host}
The naive approach to parsing will result in the wrong order of operations, since they'll just be done in the order they appear:

    x + y * z   =>   (x + y) * z
Rather than:

    x + (y * z)
As it should be. The shunting yard algorithm will rearrange the expressions.
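A sketch of the second half of that pipeline (my own minimal illustration, not the production system described here): evaluating the postfix form that shunting yard produces, with a single value stack, yields the correct order of operations:

```python
import operator

# Evaluate a postfix (RPN) token list with a single value stack.
OPS = {'+': operator.add, '-': operator.sub,
       '*': operator.mul, '/': operator.truediv}

def eval_rpn(tokens):
    stack = []
    for tok in tokens:
        if tok in OPS:
            b, a = stack.pop(), stack.pop()   # note the operand order
            stack.append(OPS[tok](a, b))
        else:
            stack.append(float(tok))
    return stack[0]

# x + y * z with x=2, y=3, z=4: postfix is 2 3 4 * +,
# which evaluates to 14, not the naive (2 + 3) * 4 = 20.
print(eval_rpn(['2', '3', '4', '*', '+']))  # 14.0
```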

After implementing it I noticed my results were different than the reference system... and that's when I discovered we did arithmetic wrong in the main app and had for years.

So I had to hard code a toggle for whether to do math properly :(. It's a sort of Hippocratic oath when it comes to these things... first do no harm, and even if it was wrong, people were relying on the existing functionality, and changing it would likely result in sudden alerts for folks.

In the end we did fix it in the main app, but you always feel kind of dirty writing code like that.


Just the other day I was thinking about how all this infix business is needlessly complicated and leads to subtle bugs and hard-to-understand code. Every time I encounter an uncommon operator in some language I have to look up its precedence. So much time is wasted trying to satisfy this silly familiarity with math notation. Just imagine how much easier things could be if all infix operators were, for example, left-associative and had the same precedence: no more parsing bugs, no implicit orders and behaviors to remember, a consistent order, even more natural and familiar than math notation, letting you focus on the things that matter and forget about dealing with precedence and associativity.
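To illustrate the proposal: with a single precedence level and uniform left associativity, the parser collapses to a simple left fold. A minimal sketch (assuming a pre-tokenized, alternating operand/operator list):

```python
# One precedence level, all left-associative: parsing is just a left fold.
def parse_flat(tokens):
    tree = tokens[0]
    for i in range(1, len(tokens), 2):
        tree = (tokens[i], tree, tokens[i + 1])   # (op, left, right)
    return tree

# x + y * z groups strictly left to right: ((x + y) * z)
print(parse_flat(['x', '+', 'y', '*', 'z']))  # ('*', ('+', 'x', 'y'), 'z')
```

Note this deliberately gives (x + y) * z, not the conventional math reading.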


If you're going to put all operators at the same level of precedence, you've accepted lots of parentheses anyway; so why not just go all the way and require parentheses everywhere, meaning that no associativity need be specified either?


I've always thought it would be cool to have a language where functions were tagged as distributive, associative, symmetric/commutative, and monotonic and the optimizer (and editor!) used these properties to determine optimizations or simplifications. Note that this would be for all expressions, not just arithmetical ones.

I vaguely recall hearing about some fancy C++ template based techniques for accomplishing this, actually, but haven't looked into it.


Swift has operator precedence levels, and you are free to use them for your own operators: https://developer.apple.com/documentation/swift/swift_standa...


Mathematica/the Wolfram Language does that, e.g. the Flat attribute denotes associativity, etc. https://reference.wolfram.com/language/ref/Flat.html


I also wonder about that. However, how much dislike would forcing parentheses for math code cause? For me, I could live with it, but I'm not a math-heavy coder...

How much other stuff, apart from math, could be impacted?


Because, essentially, the shunting yard algorithm is just an LR(0) parser: it awaits sufficient information first, in a post-order manner (shifting), then pushes to the output queue each time a production is found (reducing).


I implemented this as a class assignment in Univac 1100 assembly language in the late 70s. I always thought it was a cool algorithm. I had an HP scientific calculator at the time and was a fan of RPN.


It's a neat algorithm :) When learning Go I made an implementation of Shunting-Yard at some point: https://github.com/DylanMeeus/GoPlay/blob/master/ExpressionP...

It was part of something else I was trying, and as I was just learning Go at that point, the code is not that clean :D But AFAIK it did work. My memory is a bit hazy on that :)


This is one of those algorithms that everyone should understand or memorize. I had variations of this algorithm in the coding interviews or coding challenges of 6 companies (!) my last cycle.


> This is one of those algorithms that everyone should understand or memorize.

Understand... perhaps; it is a fascinating algorithm. It's also not complicated at all.

Memorize? Perhaps for code challenges, interviews and special occasions like that... but I haven't found it to be useful enough to commit to memory, because in order to parse truly arbitrary expressions, you need to remember much more than just the shunting yard rules.

I say this as someone who authored a mathematical modeling DSL in grad school and had to implement this algorithm to correctly parse arbitrarily complicated math expressions. I was deep in the weeds of this. But I don't think I would have been able to reproduce it from memory even during that time. (Also, I had to implement a more general version of the algorithm that dealt with function composition, multi-argument functions, unary operators, special keywords like sum/product over sets, etc.)

Of course nowadays I'm so lazy that I just do "import asteval" in Python. The reason is that once you get beyond the simple operators, arbitrary math expression parsing can be quite hard to get right, and I prefer using something that's heavily tested with no unhandled corner cases.


A good way to rederive table-driven expression parsing on the spot is to start with recursive descent and notice how it makes a recursive call for every level of precedence, even the levels where "nothing happens" because there's no operator of that precedence level at this place in the input. Use the table to call the "right place in the grammar" directly. This works out to be the precedence-climbing algorithm.

I guess you'd get to the shunting yard from there by turning the remaining recursion into an explicit stack, but I haven't worked that out to check.


That sounds about right. I usually use this simple snippet as my starting point [1]. As you can see, to implement a working stack evaluator one has to do a little bit more than just handle infix operators.

[1] PyParsing Simple Calc - see the evaluateStack() function. https://github.com/pyparsing/pyparsing/blob/master/examples/...


Hah, a few years ago I learned Go by building a simple programming language (super basic with Lua-like syntax) and definitely used this article to help me. I made a slight modification so it could support functions with multiple arguments. Fun times.

https://github.com/bhoeting/blast/blob/master/parser.go#L82


I was hoping for an algorithm that solves railway shunting problems. Anyone know of something that does that?

https://en.wikipedia.org/wiki/Train_shunting_puzzle


This is really interesting. Found: https://www.researchgate.net/publication/225576076_Shunting_...

I wonder what real railway yards do.


Train shunting problems seem like a variation of the Tower of Hanoi: https://en.wikipedia.org/wiki/Tower_of_Hanoi


Encode it into a SAT instance and use a solver.


The original paper is a good read to get some insight into the thought process: https://www.cs.utexas.edu/~EWD/MCReps/MR35.PDF



