"Table based data speculation" - so a lookup table?? Would be interested in knowing how innovative the patent is from someone with more knowledge in this field.


Not really, although the technique employs a look-up table. The basic idea is that they found that instruction sequences which cause a mis-speculation once tend to cause it again in the future. This is in the context of ILP -- a mis-speculation means the processor thought an instruction didn't need data from an earlier one and executed it in advance, but it turned out the result did depend on that earlier instruction, so the speculative result was squashed and the instruction was executed again.

To keep this from happening too often, they added a(nother) feedback loop to the whole process: if a sequence of instructions causes mis-speculations often enough, it's added to a table, so that when it's encountered again the control unit doesn't speculatively execute an instruction whose result is most likely going to be thrown away anyway.
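
A very rough sketch of that feedback loop in Python (the table key, counter, and threshold are my own illustrative choices, not anything specified in the patent):

    # Illustrative only: remember which instructions have mis-speculated
    # before, so the core can stop speculating on them.
    MIS_SPECULATION_THRESHOLD = 2  # made-up value

    misspec_counts = {}  # instruction address -> observed mis-speculations

    def should_speculate(pc):
        # Speculate unless this instruction has burned us too often already.
        return misspec_counts.get(pc, 0) < MIS_SPECULATION_THRESHOLD

    def report_misspeculation(pc):
        # Called when a speculatively executed instruction had to be squashed.
        misspec_counts[pc] = misspec_counts.get(pc, 0) + 1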

I don't know if there's prior art to this, but the basic idea is fairly straightforward. I wouldn't be surprised to learn that IBM or DEC engineers knew about this prior to 1996.


The first (and often only) thing to read is the claims. Claim 1 for example:

1. In a processor capable of executing program instructions in an execution order differing from their program order, the processor further having a data speculation circuit for detecting data dependence between instructions and detecting a mis-speculation where a data consuming instruction dependent for its data on a data producing instruction of earlier program order, is in fact executed before the data producing instruction, a data speculation decision circuit comprising:

a) a predictor receiving a mis-speculation indication from the data speculation circuit to produce a prediction associated with the particular data consuming instruction and based on the mis-speculation indication; and

b) a prediction threshold detector preventing data speculation for instructions having a prediction within a predetermined range.


Bear in mind the patent was filed in 1996 when considering the level of innovation.


Having a look up table for branch prediction? If something happens a lot it might happen in the future. That doesn't even sound that novel.


There were branch-predicting mainframes back in the 80s, though it really came to the fore with the advent of superscalar microprocessors in the early 90s (MIPS R8000 and DEC Alpha 21064). What are the paper's innovations beyond those?


It's pretty clear from even a cursory look – the patent covers a second level of branch prediction, where the processor determines whether or not it should bother speculatively executing an instruction based on the previous outcome of that speculative execution.

Whether or not there was a prior implementation of this, I don't know – but it's also obviously more than a simple saturating counter branch predictor or whatever.
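
For contrast, the kind of simple saturating counter being compared against looks roughly like this (a toy illustration of my own, not anything from the patent):

    # Toy two-bit saturating counter, the classic simple branch predictor
    # this mechanism is being contrasted with (illustrative only).
    class TwoBitCounter:
        def __init__(self):
            self.state = 0  # 0..3; 2 or 3 means "predict taken"

        def predict(self):
            return self.state >= 2

        def update(self, taken):
            if taken:
                self.state = min(3, self.state + 1)
            else:
                self.state = max(0, self.state - 1)

The claimed mechanism instead feeds mis-speculation outcomes into a per-instruction prediction plus a threshold that gates whether to data-speculate at all.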


http://www.theonion.com/blogpost/fuck-everything-were-doing-...

So inductive patent expansion. Take previous innovation, add 1, profit.

What about L0 and L4 caches? Or renaming sets of renamed registers? The problem, as others have outlined, is that patents are not concrete enough. Simply describing a problem is enough to be granted a patent. The value of the description of most patents is zero, which afaik is the opposite of their intended effect as a record and transfer of technology.


The blithe dismissal of the value of incremental improvement doesn't make you look smarter.


Thanks for the bespoke advice.


None of those processors had anything like this.

The innovation of the UWM paper was the MDPT and DDST, then merging them for practical manufacturing reasons, and then studying the trade-off with a simulator to arrive at a very efficient system.

For comparison, here is the IBM patent for the bigger, more expensive approach used in Power4:

http://www.google.com/patents/US5615350


This is not even related to branch prediction; it is a scheme for controlling load speculation, meaning executing a load out of order even if there are stores with unresolved addresses ahead of it. The paper isn't proposing load speculation itself, it's proposing a predictor for when it should be done.
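
Roughly, the decision being predicted looks like this sketch (the class, method names, and threshold are mine, purely illustrative, not from the patent):

    # Illustrative only: decide whether a load may issue ahead of older
    # stores whose addresses aren't resolved yet.
    class LoadSpeculationPredictor:
        def __init__(self, threshold=2):  # made-up threshold
            self.threshold = threshold
            self.conflict_counts = {}  # load pc -> past conflicts observed

        def may_hoist(self, load_pc, has_unresolved_older_store):
            if not has_unresolved_older_store:
                return True  # no ambiguity, the load is safe to issue
            # Only speculate if this load has rarely aliased an older store.
            return self.conflict_counts.get(load_pc, 0) < self.threshold

        def record_conflict(self, load_pc):
            # A hoisted load turned out to alias an older store and was squashed.
            self.conflict_counts[load_pc] = self.conflict_counts.get(load_pc, 0) + 1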


It seems to me that, to exploit ILP, the processor is able to begin executing a branch before the branch condition is known to be true.

    def func():
        if predicate():
            task1()
        else:
            task2()
In the above program, the processor would be able to speculatively start execution of task1 before the predicate function returned a value.

(obviously the processor does this on a much lower level than this code)


From what I read, it is mostly branch prediction with data pre-loading at the CPU level.



