Hrm. I guess the converse applies if nodes can have infinitely many children. That said, even if your tree is infinitely wide and deep, we're only dealing with countably many children, right? Thus a complete traversal has to exist, right?
For example, each node has a unique path to the root, so write <n1, n2, ..., nk> where each ni is the sibling ordinal of the node at depth i in that path, i.e. the node at depth i is the ni-th child of the node at depth i-1. Raising the i-th prime to the power ni and taking the product gives each node a unique integer label (a Gödel numbering). Traverse nodes in label order and voilà?
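A minimal sketch of that labelling in Prolog (the predicate names are mine; Path is the list of 1-based sibling ordinals from the root):

% label(+Path, -N): N is the product of p_i raised to n_i over the
% first k primes, where Path = [n1, n2, ..., nk].
label(Path, N) :- label_(Path, 2, 1, N).

label_([], _, Acc, Acc).
label_([Ord|Ords], P, Acc0, N) :-
    Acc is Acc0 * P^Ord,
    next_prime(P, Q),
    label_(Ords, Q, Acc, N).

next_prime(P, Q) :-
    P1 is P + 1,
    ( is_prime(P1) -> Q = P1 ; next_prime(P1, Q) ).

is_prime(2).
is_prime(N) :- N > 2, \+ ( between(2, N, F), F*F =< N, N mod F =:= 0 ).

For example, label([1,2], N) gives N = 2^1 * 3^2 = 18, and since a child's label is always a proper multiple of its parent's, visiting labels in increasing order visits parents before children.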
However, that all assumes we know the tree beforehand, which doesn't make sense for generic call trees. Do we just smash headfirst into Rice's theorem when trying to traverse in complete generality?
No, breadth-first search is still complete given an infinite branching factor (i.e. a node with infinitely many children). "Completeness" is not about finishing in finite time; it also applies to completing in infinite time.
Breadth-first search visits every node level by level, so given infinite time, the solution would eventually be visited.
Meanwhile, if a branch had a cycle in it, then even given infinite time a naive depth-first search would be trapped there, and the solution would never be found.
Suppose you have a node with two children A and B, each of which has infinitely many children. If you performed an ordinary BFS, you could get trapped in A's children forever, before ever reaching any of B's children.
Or, suppose that a node has infinitely many children, but the first child has its own child. A BFS would get stuck going through all the first-level children and never reach the second-level child.
A BFS-like approach could work for completeness, but you'd have to put lower-level children on the same footing as newly-discovered higher-level children. E.g., by breaking up each list of children into additional nodes so that it has branching factor 2 (and possibly infinite depth).
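One concrete version of that rewriting is the classic left-child/right-sibling encoding, where the left edge goes to a node's first child and the right edge to its next sibling, so every node ends up with branching factor at most 2. A sketch for a finite tree (the node/2 and bin/3 representation is made up; for an infinite tree you'd build this lazily, but the shape is the same):

% to_binary(+Tree, -Bin): node(Label, Children) -> bin(Label, FirstChild, NextSibling)
to_binary(T, B) :- tb(T, leaf, B).

tb(node(L, Cs), Sib, bin(L, Kids, Sib)) :- chain(Cs, Kids).

chain([], leaf).
chain([C|Cs], B) :- chain(Cs, Sibs), tb(C, Sibs, B).

An infinite child list becomes an infinite right-spine, and BFS on the binary tree walks one step further down that spine per level, so every original node is reached after finitely many steps.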
Countable infinity does not work like that: two countable infinities are not more than one countable infinity. I think it falls into the "not even wrong" category of statements.
Yes, if you put two (or three, or countably many) countable sets together, you obtain a set that is also countable. The problem is, we want to explicitly describe a bijection between the combined set and the natural numbers, so that each element is visited at some time. Constructing such a bijection between the natural numbers and a countably-infinite tree is perfectly possible, but it's less trivial than just DFS or BFS.
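For the flavor of such a construction, here's the Cantor pairing function and a fair enumeration of N x N in Prolog (between/3 with an inf bound is SWI-specific):

% pair(+X, +Y, -Z): the Cantor pairing, a bijection between N x N and N.
pair(X, Y, Z) :- Z is (X + Y) * (X + Y + 1) // 2 + Y.

% Enumerate all pairs so that each one appears at a finite step:
% walk the anti-diagonals X + Y = 0, 1, 2, ...
next_pair(X, Y) :- between(0, inf, S), between(0, S, Y), X is S - Y.

For a tree you need a bit more than a single pairing (a node at depth d is addressed by a d-tuple), but iterating the pairing gives you the bijection.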
If we're throwing around Wikipedia articles, I'd suggest a look at https://en.wikipedia.org/wiki/Order_type. Even if your set is countable, it's possible to iterate through its elements so that some are never reached, not after any length of time.
For instance, suppose I say, "I'm going to search through all positive odd numbers in order, then I'm going to search through all positive even numbers in order." (This has order type ω⋅2.) Then I'll never ever reach the number 2, since I'll be counting through odd numbers forever.
That's why it's important to order the elements in your search strategy so that each one is reached in a finite time. (This corresponds to having order type ω, the order type of the natural numbers.)
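In Prolog terms (again using SWI's between/3 with an inf bound), the two orderings look like this:

% Order type omega*2: all odds first, then all evens. On backtracking
% this never reaches 2, because the first generator never runs out.
unfair(N) :-
    ( between(0, inf, K), N is 2*K + 1
    ; between(1, inf, K), N is 2*K
    ).

% Order type omega: interleave, so every number appears at a finite step.
fair(N) :- between(1, inf, K), ( N is 2*K - 1 ; N is 2*K ).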
> "Completeness" is not about finishing in finite time, it also applies to completing in infinite time.
Can you point to a book or article where the definition of completeness allows infinite time? Every time I have encountered it, it is defined as finding a solution, if one exists, in finite time.
> No breadth first search is still complete given an infinite branching factor (i.e. a node with infinite children).
In my understanding, DFS is complete for finite-depth trees and BFS is complete for finitely branching trees, but neither is complete for infinitely branching, infinitely deep trees.
You would need an algorithm that iteratively deepens while exploring more children to be complete for infinitely branching, infinitely deep trees. This is possible, but it is a little tricky to explain.
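A sketch of one such scheme, with child/3 as an assumed interface (child(Parent, I, Child) yields the I-th child of Parent): at stage K, visit everything reachable in at most K steps using only the first K children at each node. Every node has a finite depth and finite sibling ordinals along its path, so it's visited at some finite stage. Like iterative deepening, it revisits nodes at each stage:

% within(+K, +Node, -Desc): Desc is reachable from Node in at most K
% steps, choosing among at most the first K children at each step.
within(_, Node, Node).
within(K, Node, Desc) :-
    K > 0,
    between(1, K, I),
    child(Node, I, Child),
    K1 is K - 1,
    within(K1, Child, Desc).

dovetail(Root, Node) :- between(0, inf, K), within(K, Root, Node).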
For a proof that BFS is not complete if it must find any particular node in finite time: imagine a tree whose root A has children B_n for all n, and each B_n has a single child C_n. A BFS searching for C_1 would have to explore all of the B_n before it could reach any C_n, so it would take infinite time before BFS found C_1.
In practice, though, with BFS you'd run out of memory instead of never finding a solution.
Also, there shouldn't be many situations where you'd be able to produce infinite branches in a Prolog program. Recursion must have a base case, just like in any other language.
This has to do with the ordering of search: searching a proof tree (an SLD tree, in SLD-Resolution) with DFS, as in Prolog, can get stuck when there are cycles in the tree. That's especially the case with left-recursion. The article gives an example of a left-recursive program that loops if you execute it with Prolog, but note that it doesn't loop if you change the order of the clauses.
This version of the program, taken from the article, loops (I mean it enters an infinite recursion):

last([_H|T],E) :- last(T,E).
last([E],E).

?- last(Ls, 3).
% Loops forever, producing no answers.

Whereas this version enumerates its answers, one at a time:

last([E],E).
last([_H|T],E) :- last(T,E).

?- last(Ls, 3).
Ls = [3] ;
Ls = [_,3] ;
Ls = [_,_,3] ;
Ls = [_,_,_,3] ;
Ls = [_,_,_,_,3] ;
Ls = [_,_,_,_,_,3] .
% And so on forever
To save you some squinting, that's the same program with the base case moved before the inductive case, so that execution "hits" the base case when it can terminate. That's half of what the article is kvetching about: that in Prolog you have to take into account the execution strategy of logic programs and can't just reason about the logical consequences of a program; you also have to think of the imperative meaning of the program's structure. It's an old complaint about Prolog, as old as Prolog itself.
I think what you mean is that he adds an argument that counts the number of times a goal is resolved, thus limiting the depth of resolution? That works, but you need to give a magic number as a resolution depth limit, and if the number is too small then your program fails to find a proof that it should normally be able to find. It's not a perfect solution.
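For concreteness, here's that idea as a depth-bounded vanilla meta-interpreter (a sketch: solve/2 is my name for it, it ignores built-ins, and some Prologs require the interpreted predicates to be declared dynamic for clause/2 to work):

% solve(+Goal, +D): prove Goal using at most depth D;
% fails (instead of looping) when the budget runs out.
solve(true, _) :- !.
solve((A, B), D) :- !, solve(A, D), solve(B, D).
solve(G, D) :-
    D > 0,
    D1 is D - 1,
    clause(G, Body),
    solve(Body, D1).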
Yes, well, not so much a constant value. He added an unbound variable and it was enough to alter the search. Indeed it's still more of a trick, but it got me interested in whether there were other, more fundamental ideas beyond that.
That sounds like iterative deepening without a lower bound then. I guess that's possible. Maybe if you had a link to Markus' page I could have a look.
There are techniques to constrain the search space for _programs_ rather than proofs, that I know from Inductive Logic Programming (ILP), like Bottom Clause construction in Inverse Entailment, or the total ordering of the Herbrand Base in Meta-Interpretive Learning. It would be interesting to consider applying them to constrain the space of proofs in ordinary logic programming.
Refs for the above techniques are here but they're a bit difficult to read if you don't have a good background in ILP:
> "Maybe if you had a link to Markus' page I could have a look."
e.g. here: https://www.metalevel.at/tist/ solving the Water Jugs problem (search on the page for "We use iterative deepening to find a shortest solution"), finding a list of moves emptying and filling jugs, and using `length(Ms, _)` to find shorter lists of moves first.
or here: https://www.metalevel.at/prolog/puzzles under "Wolf and Goat" he writes "You can use Prolog's built-in search strategy to search for a sequence of admissible state transitions that let you reach the desired target state. Use iterative deepening to find a shortest solution. In Prolog, you can easily obtain iterative deepening via length/2, which creates lists of increasing length on backtracking."
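The trick in miniature, with edge/2 as a made-up example graph:

edge(a, b). edge(b, c). edge(a, c). edge(c, d).

path(X, X, []).
path(X, Z, [X-Y|Ps]) :- edge(X, Y), path(Y, Z, Ps).

% Plain DFS finds the longer path a-b-c-d first, but length/2 generates
% lists of increasing length on backtracking, so the shortest comes first:
% ?- length(Ps, _), path(a, d, Ps).
% Ps = [a-c, c-d] .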
There are. Tabling (available in most mature implementations) helps when recalculation of the same states is a problem. Meanwhile, a custom search strategy is always an option to implement directly in Prolog. You'll see this in many Advent of Code solutions in Prolog, applied to path-finding puzzles in which depth-first search is rarely a workable solution.
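For example, with tabling (SWI-Prolog syntax; reach/2 and edge/2 are made-up names) even a left-recursive transitive closure terminates:

edge(a, b). edge(b, c).

:- table reach/2.

reach(X, Y) :- reach(X, Z), edge(Z, Y).  % left recursion: loops under plain DFS
reach(X, Y) :- edge(X, Y).

% ?- reach(a, Y).
% finds both Y = b and Y = c, then terminates.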
It ends like this, largely cancelling out the rest of the article:
>In the second iteration of the circle, all of his notes are completely useless, and all of his initial attempts to teach anything fail, because these are different kids with different aptitudes and different interests. Zvonkin, raised in a communist society and a believer in the absolute malleability of human nature, is fairly bowled over by this, especially by how young all these differences are manifesting. Reading between the lines, it sounds like he got quite lucky with his first set of children, and that the second group were much more challenging to teach. The most eloquent testimony to this is that after about a year he gives up, and the journal ends abruptly.
Adding covariates to the post-experiment analysis can reduce variance. One instance of this is CUPED, but there are lots of covariates which are easier to add (e.g. request type, response latency, day of week, user info, etc.).
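For reference (this is the standard CUPED adjustment, not something from the parent comment): the adjusted metric is Y' = Y - theta * (X - mean(X)), with theta = Cov(X, Y) / Var(X), where X is a covariate measured before (or otherwise unaffected by) the treatment. Y' has the same mean as Y, but its variance shrinks by a factor of (1 - rho^2), where rho is the correlation between X and Y.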
This looks like it might be interesting or might not, and I wish it said more in the article itself about why it's cool rather than listing technicalities and types of machines. Do you have a favorite pitch in those dozens of references at the end?
It doesn't say where the 152 astrologers were sourced, does it? Or how they were qualified? Should astrology be a field with very many impostors, these results would not be unusual.
Generally speaking I suspect it will be difficult to falsify astrology, but more importantly the onus is on the astrologer to prove their ability rather than on others to disprove it.
I think the margin between OpenAI and the next few best competitors is already fairly slim, but as OP mentioned, the margin between the median competitor and the best is also decreasing.
It suggests that playing catch-up progresses quicker than pushing the state of the art.
I don't see how any of that pops a bubble, but I think it could be transformative for the leading companies (OpenAI, Anthropic, etc.), which may become more rank-and-file service providers.
Well, their leetcode rating says something about their ability to solve novel abstract problems using computer science tooling. All other things being equal, the candidate better at that is the better candidate.
Meanwhile, it is not a complete assessment. You can do all the other things you've stated and leetcode will still add value.
It depends on how it's used. For example, if you have to repeat your performance in a live interview, you'll struggle to do so. Similarly, if the employer uses a confirmatory problem set that the LLMs get wrong (e.g. Advent of Code does that), you will struggle.
The point of leetcode et al is to measure how able you are to solve problems which you have not seen before. It is not a comprehensive evaluation but a useful part of one.
Breadth-first search is complete even if the branches are infinitely deep, in the sense that if there is a solution it will find it eventually.
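A queue-based sketch of what's being claimed (children/2 is an assumed interface returning a node's children as a list):

% bfs(+Queue, -Found): visit nodes level by level.
bfs([Node|_], Node).
bfs([Node|Queue], Found) :-
    children(Node, Cs),
    append(Queue, Cs, Queue1),
    bfs(Queue1, Found).

% ?- bfs([Root], Goal).
% Complete as long as every children/2 call returns a finite list,
% i.e. finite branching; the depth can be unbounded.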