Sorting was broken in Java and nobody noticed for a long time: http://envisage-project.eu/wp-content/uploads/2015/02/sortin...
The same was true for Java's binary search:
So I am not sure I will ever trust an ML algorithm trained on inputs/outputs only (which is what I think "neural program induction" means). The above bugs were only hit because the programmers who wrote the code didn't think about overflow. What is this "neural programmer" assuming about its inputs? We'll never know.
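For the curious, the overflow at the heart of both bugs, in a minimal sketch (the indices are hypothetical, chosen just large enough to overflow a 32-bit int):

    public class MidpointOverflow {
        public static void main(String[] args) {
            // Indices plausible for a ~1.6-billion-element array.
            int low = 1_500_000_000, high = 1_600_000_000;
            System.out.println((low + high) / 2);       // negative: low + high overflows int
            System.out.println(low + (high - low) / 2); // 1550000000, correct
            System.out.println((low + high) >>> 1);     // 1550000000, also correct (unsigned shift)
        }
    }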
OTOH the standard library sort can be improved if you know the distribution of the numbers you're sorting (e.g., small numbers can use bucket sort, small lists can use a sorting network, etc.). If this thing can efficiently almost-correctly sort because it's better at these kinds of pattern matching, we can just run a final pass of insertion sort to make it useful!
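A minimal sketch of that cleanup pass (assuming the almost-sorted output only has misplaced elements, not dropped or duplicated ones):

    // Final cleanup pass over an almost-sorted array.
    // Insertion sort runs in O(n + number of inversions), so if the learned
    // sorter leaves only a few elements out of place, this pass is near-linear.
    static void insertionSortCleanup(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i];
            int j = i - 1;
            while (j >= 0 && a[j] > key) {
                a[j + 1] = a[j]; // shift larger elements right
                j--;
            }
            a[j + 1] = key;
        }
    }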
I haven't read this paper, but that was exactly the strategy behind "The Case for Learned Index Structures". The idea was that you could train a model to predict the location of a row based on its primary key. After training the model, you go through every row and compute a bound on how far off the model is from the actual row. To perform a lookup, you start at the model's guess, then search around it, within that bound, for a row with the given primary key.
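Roughly, the lookup then looks like this (a sketch; the names `predicted` and `maxError` are mine, not from the paper):

    // Hypothetical learned-index lookup: `predicted` is the model's guess for
    // the position of `target`, and `maxError` is the worst-case error found
    // by replaying every key through the model after training.
    static int lookup(long[] keys, long target, int predicted, int maxError) {
        int lo = Math.max(0, predicted - maxError);
        int hi = Math.min(keys.length - 1, predicted + maxError);
        while (lo <= hi) { // binary search restricted to the model's error window
            int mid = lo + (hi - lo) / 2;
            if (keys[mid] == target) return mid;
            if (keys[mid] < target) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1; // key not present
    }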
Depends on what kinds of mistakes the almost-correct approach makes. If it just puts elements in the wrong order, your suggestion works. But if it makes mistakes like duplicating elements, or dropping them, or completely making up new entries, no post-processing will help.
> OTOH the standard library sort can be improved if you know the distribution of the numbers you're sorting (e.g., small numbers can use bucket sort, small lists can use a sorting network, etc.).
Even better: the standard library sort usually assumes nothing but a comparison function. For special cases like sorting on integer keys you can conceivably do better.
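For instance, a sketch of counting sort over small non-negative integer keys, which avoids comparisons entirely:

    // Counting sort for keys known to lie in [0, maxKey]: O(n + maxKey) time,
    // no comparisons at all, so the n log n comparison bound doesn't apply.
    static void countingSort(int[] a, int maxKey) {
        int[] count = new int[maxKey + 1];
        for (int x : a) count[x]++;       // tally each key
        int idx = 0;
        for (int k = 0; k <= maxKey; k++) // write keys back in sorted order
            while (count[k]-- > 0) a[idx++] = k;
    }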
The algorithm modifies the original list only by swapping entries. As such, it's guaranteed to result in a permutation.
But it also limits their system to in-place algorithms only.
You are right in some sense, but you'd still have trouble implementing something like bucket sort in your setting, if all you have are comparisons.
I feel that if we ever want to build an intelligent problem-solving machine, we need some facility for abstract reasoning, and work like this paper is a major contribution towards that goal.
(if it's reproducible and valid, of course...)
It’s doing TDD!
It's just that algorithms like Prolog's WAM or the Rete algorithm used in rule engines are replaced with neural nets and the like.
Rules use the same When/Then format acting on a Given fact base. Prolog's Horn clauses can easily be translated into that form.
BDD writes all tests in the Given/When/Then format.
It’s already here.
According to the article, they couldn't find a case where it wasn't correct.
Nothing saddens me more than this trend of the modern web where everything works semi-probabilistically (even if it's likely for "good" technical reasons, such as "we wrote our server backend in Ruby, which is slow as molasses, so now we need 230 CDNs and 800 database instances around the whole world, and we've transformed our simple centralized problem into a horrendous decentralized one").
The central reason for me to use computers is that they are (or at least were) deterministic to a much higher degree than normal life. So many things becoming much more non-deterministic in the last 10 years, in particular on social websites, is something that frustrates me every single day, as it makes the whole experience and process of using computers & the web very unreliable compared to what it used to be.
(The article was very light on details, though, and it was probably just one team that consolidated onto MongoDB, not the accounts themselves... one hopes.)
This trend seems to be everywhere.
Stack Overflow was using just 4 DB servers in 2016 (https://nickcraver.com/blog/2016/02/17/stack-overflow-the-ar...)
That is why I am not a huge fan of proving code correct in the first place.
If you have a really complicated proof of formal correctness, then who guarantees you that there is no mistake in the proof itself? At least this is what I have seen at university: lots of complicated stuff on the whiteboard, leading to a result, and then someone figured out it was wrong.
So I rather do lots of testing, for all known test cases.
How is this any different? You can still make mistakes in your test code, test data, omit cases that would fail, etc. I think I'd be less confident in a set of unit tests than I would with an automated proof checker.
I do not understand complicated proofs without much effort.
And I surely know that you can make mistakes with test cases as well. So I surely do not claim my way to be superior. But it works way better for me.
Especially with the test cases needed to trigger some deep bug.
How do we deal with problematic humans?
Retraining or replacement.
Both humans and ML algorithms are flexible. That is the point of “learning”.
Neither humans nor ML make zero errors.
Ceteris paribus, if an ML algorithm makes fewer errors at a task with low error tolerance, you would use the algorithm instead of the human, no?
Perfect is the enemy of good enough.
I suppose it depends on your end goals.
With black box algorithms, we throw in some new training data and just hope it's enough.
> With black box algorithms, we throw in some new training data and just hope it's enough.
A small wording change and they're equivalent again.
To some extent human beings are also a black box - with some very peculiar failure conditions and side-channel weaknesses.
However, it is true that validating it for a very large number of inputs and outputs has value. This is not uncommon even in mathematics. For example, Goldbach's conjecture (https://www.wikiwand.com/en/Goldbach%27s_conjecture) has not been proven, but has been shown to hold for all integers less than 4 × 10^18.
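A toy sketch of that kind of brute-force verification, at a vastly smaller scale of course:

    // Check Goldbach's conjecture for one even n >= 4: does some prime p
    // exist with n - p also prime? (Trial-division primality, toy scale only.)
    static boolean isPrime(int n) {
        if (n < 2) return false;
        for (int d = 2; (long) d * d <= n; d++)
            if (n % d == 0) return false;
        return true;
    }

    static boolean goldbachHolds(int n) {
        for (int p = 2; p <= n / 2; p++)
            if (isPrime(p) && isPrime(n - p)) return true;
        return false;
    }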
There’s some cool work on using neural program induction in compilers to try to generate faster implementations based on the ‘oracle’ program that’s being compiled.
Can you provide more details?
I am familiar with garden-variety ML optimization of combinatorial search spaces in compilers.
How would you even "measure" the expected performance, unless these are gradual greedy changes?
First sort using the ML algorithm. Then sort the almost sorted result with a traditional algorithm optimized for almost sorted results.
In fact, I think my own brain does the two phase approach to many problems.
I used to ask "implement binary search" as an interview question and gave up on using it because not a single candidate could do it correctly in 45 minutes. This was at a FAANG.
If they didn't, then I believe you did a poor job of guiding them. I refuse to believe that this is some esoteric, tough question. If you can solve dynamic programming problems, you can easily solve this one.
(high + low) / 2
low + (high - low) / 2
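For reference, the whole routine with that fix applied, as a sketch (the negative return encodes the insertion point, following java.util.Arrays.binarySearch's convention):

    // Overflow-safe binary search over a sorted array.
    static int binarySearch(int[] a, int key) {
        int low = 0, high = a.length - 1;
        while (low <= high) {
            int mid = low + (high - low) / 2; // avoids the (low + high) overflow
            if (a[mid] < key) low = mid + 1;
            else if (a[mid] > key) high = mid - 1;
            else return mid;
        }
        return -(low + 1); // key absent: encode where it would be inserted
    }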
What else about this problem is a "sharp edge"?
Candidate: bsearch(key, base, num, size, compare_fn);
Me: "Your offer should arrive within one to two weeks. Need any help with relocation expenses?"
But yes - I agree it’s weird that many standard libraries (looking at you, .NET!) have only a single `Sort` method on their vector types, with the details of the implementation buried in the documentation, instead of shipping a first-class sorting-algorithms library.
If you have a source, I would like to learn more.
(And if this insertion sort won't complete in O(n), say <= 3n, then fall back to quicksort, so it won't turn into O(n^2) just because the ML algorithm hit a corner case.)
It runs in O(n log (number of runs)). It's quite a lot simpler than Timsort.
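A quick sketch of the run counting behind that bound (the merge passes themselves omitted); a natural merge sort then needs about log2(runs) passes over the data:

    // Count the maximal ascending runs in the array. A natural merge sort
    // merges adjacent runs pairwise, so total work is O(n log runs).
    static int countRuns(int[] a) {
        if (a.length == 0) return 0;
        int runs = 1;
        for (int i = 1; i < a.length; i++)
            if (a[i] < a[i - 1]) runs++; // each descent starts a new run
        return runs;
    }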
According to the authors, the proposed approach is notable for training not only on input/output pairs but also on program traces. In addition, reinforcement learning is used to train e.g. sorting agents on the behaviour of "teacher agents" that comprise hand-crafted implementations of well-known sorting algorithms (bubble sort, insertion sort, and quicksort).
Pfffft, just wait until they get it writing code with its own proofs of correctness in dependent type theory. It'll be correct and twice as unreadable as incorrect human-written code!
Checking that the output is an (optionally stable) permutation of the input is the hard part. You can't do that dynamically if you have, for example, overwritten your input by sorting in place. And making a copy of your input is only going to be affordable in very specific and small instances, on which you might as well just run a well-understood algorithm.
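(The multiset comparison itself is straightforward once you have paid for the copy; a rough sketch:)

    import java.util.HashMap;
    import java.util.Map;

    // Verify the output is a permutation of the input by comparing multisets.
    // Requires keeping a copy of the input, which is exactly the cost pointed
    // out above for in-place sorts.
    static boolean isPermutation(int[] before, int[] after) {
        if (before.length != after.length) return false;
        Map<Integer, Integer> counts = new HashMap<>();
        for (int x : before) counts.merge(x, 1, Integer::sum);
        for (int x : after) {
            Integer c = counts.get(x);
            if (c == null || c == 0) return false;
            counts.put(x, c - 1);
        }
        return true;
    }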
    procedure QuickSortAgent(input state)
        Let i = 1, j = 2, l = 3, h = 4
        if FunctionID = None then
            return vh ← Function1(vl ← vl, vh ← vh)
        else if FunctionID = 1 then                ▷ QuickSort
            if vl < vh then
                if prev = None then
                    return vi ← Function2(vl ← vl, vh ← vh)
                else if prev = (vi ← Function2(vl ← vl, vh ← vh)) then
                    return AssignVar(j, i)
                else if prev = AssignVar(j, i) then
                    return MoveVar(i, -1)
                else if prev = MoveVar(i, -1) then
                    if vi > vl then
                        return vi ← Function1(vl ← vl, vh ← vi)
                    else
                        return MoveVar(j, +1)
                    end if
                else if prev = (vi ← Function1(vl ← vl, vh ← vi)) then
                    return MoveVar(j, +1)
                else if prev = MoveVar(j, +1) and vj < vh then
                    return vh ← Function1(vl ← vj, vh ← vh)
                else
                    return Return(h)
                end if
            else
                return Return(h)
            end if
        else                                       ▷ FunctionID = 2, Partition
            if prev = None then
                return AssignVar(i, l)
            else if prev = AssignVar(i, l) then
                return AssignVar(j, l)
            else if vj < vh then
                if prev = Swap(i, j) then
                    return MoveVar(i, +1)
                else if (prev = AssignVar(j, l) or prev = MoveVar(j, +1)) and A[vj] < A[vh] then
                    if vi ≠ vj then
                        return Swap(i, j)
                    else
                        return MoveVar(i, +1)
                    end if
                else
                    return MoveVar(j, +1)
                end if
            else if prev = MoveVar(j, +1) then
                return Swap(i, h)
            else
                return Return(i)
            end if
        end if
    end procedure
But yes, this was uncommon so line numbers were a major thing.
And indentation was optional: you could add extra spaces to the start of lines and they would be kept, having no effect on execution, but you would be wasting precious bytes of RAM. And if you were using one of the heavier display modes (0, 1 or 2), you would only have 8.5 KB to play with for all the code and run-time storage (your variables, BASIC's call stack).
In fact, as far as I understand it the paper claims that the learned model (i.e. the student agent) has learned to reproduce the behaviour of this teacher agent after training for a smaller number of steps on average than what the teacher needs to sort a list. That is what the entire claim about superior efficiency rests on (there is no example of a learned model and no asymptotic analysis of the program such a model represents).
"We found adding the previously executed action to the input state st is sufficient to handle dis-ambiguation for this quick sort implementation. Alg. 8 shows the converted quick sort scripted agent"
Essentially they generate functions that map a program state to another program state, count those function calls, and compare with e.g. the number of function calls in quicksort. They "cheat" by feeding in how each element compares to its neighbours at each step. Like, if you give the algorithm that input for free, what's the point?
Notably, none of the popular sorting algorithms decide which elements to swap by looking at the whole input at each execution step. On the contrary, the decisions are typically made based on local evidence only. ... We implement this principle for the sorting task by employing a small set of k (independent of n) index variables, and include in the input the information about how A[i] compares with A[i+1] and A[i−1] for each index variable i.
The point, as also explained in the paper, is that even with such information and with a handful of basic operations from which to compose a program, the space of all possible programs is huge, and finding a program that does what it should in any reasonable amount of time is extremely difficult.
If you are familiar with the literature of program induction (which includes a lot more than neural program induction and goes back a few decades) you will notice that the hypothesis spaces for program learning are truly vast and it is impossible to make any kind of progress without some kind of "dirty trick" to reduce their size. The common approach is to impose a strong inductive bias on the representation language, so as to guide the search to find a correct program. The approach in the article is no different. And of course "inductive bias" here always means domain knowledge.
But, amicably, if you think there's any better way to do this, many people, myself included, would be very curious to hear what you have to say. In fact, if you could really innovate in this field you'd be guaranteed an illustrious research career. Learning programs from examples is hard.
The real point of what they’re saying in your italicized quote is actually that giving the net full access hinders efficiency, so they actually restrict it. Almost like the opposite of cheating.
Doing so many comparisons and storing them has a cost. Also, the model can't decide when it is done, so at each step the "environment" has to iterate over the array to see if it is sorted. Are they only counting function calls? I guess so. The paper is really hard to follow and the pseudocode syntax is quite maddening.
If I understand the paper correctly, of course; I could be wrong.
If so, "our approach can learn to outperform custom-written solutions for a variety of problems", is bogus.
Oh that, yes, it's true. They're listing "average episode lengths" in tables 1-3 and those are their main support for their claim of efficiency. By "episode length" they mean instruction or function calls made during training by the student agent which they compare to the instructions/function calls by the teacher agent. So, no asymptotic analysis, just a count of concrete operations performed to solve e.g. a sorting task.
How is that "cheating"? The algorithm is still learning how to do a comparison-sort, which has direct applications (assuming it performs well).
Yes. In particular, the O(n log n) lower bound holds for comparison sorts, that is algorithms that rely on pairwise comparison only.
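For reference, the standard counting argument behind that bound, sketched in LaTeX:

    % A comparison sort must distinguish all n! input orderings, and its
    % decision tree is binary, so a tree of height h has at most 2^h leaves:
    2^h \ge n! \quad\Longrightarrow\quad h \ge \log_2 n! = \Theta(n \log n)
    % where the last step uses Stirling's approximation,
    % \log_2 n! = n \log_2 n - O(n).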
1. It runs slower on modern CPUs, and you need "Neural Logic Units" to see the speed benefits.
2. The model needs to be fine-tuned to the data for best performance.
While it may be better than quicksort in certain use cases, it isn't going to replace it anytime soon.
1) they train a model to do sorting, it actually sorts correctly.
2) they optimize the model on efficiency and it becomes better than many custom sort functions?
If I remember correctly from school, you can basically speed up any kind of sorting by using more space (basically by using a hash table). Is the neural network just using the normal sort algorithm until it sees a pattern in the input, and then skipping the sort algorithm for the pattern's output? Or, to put it plainly, is it just a smarter (or dumber, depending on the size of the neural network) hash table?
How do you prove that with an algorithm that no human can reason about?
Basically this paper looks like they've found an algorithm which is efficient for given sets or given classes of sets. Whether that generalizes is a different problem.
It's basically a cool automatic heuristic generator.
This is something I see becoming a problem in the future for machine learning.
Table 3 is very suspicious too. If they're doing what they claim, then why is Binary Search all of a sudden significantly better than the learned agent for some problem sizes? It really feels like they're running up against some kind of inherent optimization boundary inside their neural network library...
“The instruction set together with the input representations jointly determine the class of algorithms that are learnable by the neural controller, and we see this as a fruitful avenue for future research akin to how current instruction sets shaped microprocessors.”
“The generalization or correctness we aim for is mostly about generalization to instances of arbitrary sizes.”
“[...] computing a = f(s) can be more expensive on current CPUs than executing typical computation employed in the algorithms studied here. We thus hope that this research will motivate future CPUs to have “Neural Logic Units” to implement such functions f fast and efficiently, effectively extending their instruction set, and making such approaches feasible.”
"We include videos in the supplementary material showing the execution traces of the learned algorithms compared with the teachers, and observe qualitatively different behaviors."
Anyone know where we can find these?
I'm not going to drop something into my production code that works on just the inputs you tested with. As ghj pointed out, Java's highly analyzed sort method had a bug for years and we didn't notice.
It's neat research and I won't discourage it at all, but give me some proofs of correctness.
It's a Comparison Sort, as they say on page 5 - so I think the answer should be yes.
In summary, this paper makes strong claims of generalisation and efficiency
but does not seem to provide sufficiently strong evidence to back them up:
e.g. it does not perform an asymptotic analysis to support the efficiency
claim, it doesn't show actual program output to allow evaluation of what is
actually learned, and the claim about strong generalisation is based on a
very vaguely described experimental setup.
Longer version below.
Interesting paper. It describes a Reinforcement Learning method to learn
neural network models that imitate the behaviour of agents performing a)
sorting, b) binary search and c) solutions to the NP-complete knapsack
problem. The technique is notable because it learns from program traces also,
not only from input-output pairs. The technique relies on a hard-coded set of
problem-dependent a) input states and b) instruction sets. The latter are what
it sounds like, sets of instructions that form the building blocks of the
learned programs, although in an enhanced version of the technique, functions
are also added to the (less complex?) instructions. Input states basically
include all other information that may be useful to a learner, other than the
instructions and functions from which to build a program. Two different setups
are tested: a) one using the single instruction "swap" (that swaps two
elements) and where the input state includes information of the full list, and
b) one with more instructions and localised information that is reported to
generalise better and be more efficient. The latter setup is also augmented
with "functions" which are stated to be more complex than operations, though
the distinction is rather on the vague side (there is some more information in
an appendix but it's not very helpful, I'm still left wondering why an
"instruction" is qualitatively different to a "function").
The paper is poorly organised and requires long appendices to elucidate all
manner of poorly defined concepts in the main paper. This doesn't help when it
comes time to evaluate the two most striking claims in the paper, namely that
the proposed method(s?) a) show strong generalisation and b) outperform the
hand-crafted quick-sort algorithm in sorting lists of numbers.
The first claim is difficult to evaluate because the paper presents results
of sorting lists of variable _size_ but does not say anything clear about
list _contents_. For example, while training lists are chosen to have a
random length in [10,20] and elements in the same range, testing lists have
their length increased, but there is no mention of the values of the
elements in those lists and, going by what is described in appendix C, lists
can have duplicate values. In other words, it's possible that the "strong
generalisation" results refer to sorting lists of variable size with the
same unique elements as in the training lists. In that case, the claim of
"strong" generalisation is not supported.
The second claim is difficult to evaluate for two reasons: a) there is no
example of the training output, i.e. we never get to see what programs,
exactly, the proposed approach is learning; and b) there is no attempt at an
asymptotic analysis of the learned models (which would be hard without an
example thereof). Instead, the claim of efficiency seems to rest on two
premises: a) that sorting programs of O(n log n) time complexity are in the
hypothesis space defined by the input states, instruction sets and functions
for the relevant experimental setup, i.e. the learner could learn an
O(n log n) sorting algorithm; and b) that the number of training steps taken
by the learning agents in imitation of the teacher agents is in the ballpark
of n log n of the input. In other words, the strongest evidence for the
efficiency claim seems to be a count of instruction (or function) calls
during training. So, this very strong claim is also very poorly supported.
Functions can be recursive. So the algorithm that the agent generates is able to use divide-and-conquer.
This is not cool.
agreed, this is kind of backwards research
Their algorithm is a comparison sort, as they make clear on page 5.