Hacker News
Apple facing huge chip patent bill after losing case (bbc.com)
156 points by jnord on Oct 14, 2015 | 141 comments

Looks like the "idea" of the patent in the description is to use a predictor to predict when a STORE and LOAD alias and not speculate the LOAD and any instruction depending on the load (although the claims generalize this to any non-static dependency).

As it generally happens in software/hardware patents, the claimed solution seems quite obvious whenever one wants to solve that particular problem, and the hard part is the "execution", i.e. implementing it efficiently and figuring out whether the tradeoffs are worth it.

So assigning patents to things like this seems really dumb.

I strongly agree with you. To patent an idea with no definition of execution seems too broad to me.

An easy example is patenting an "opening mechanism that requires force" (a door). Yes, that's an idea, but it can be executed in different ways: a sliding door, a regular hinged door, suicide doors.

Now a patent on the hinge that performs the operation seems more concrete. The exact execution of said idea.

What is your basis for the claim they've not evaluated the tradeoffs or tried to design an efficient implementation?

The patent seems to be based on this paper: http://dl.acm.org/citation.cfm?id=264189. It has an extensive experimental evaluation. Another thing to keep in mind is that Sohi, Vijaykumar and Moshovos are all respected computer architecture researchers, and this paper was published in ISCA, which is the best and most competitive forum for computer architecture research.

That might be true, but that doesn't mean it's something that should be patented.

Would you like to expand on why you think it shouldn't be patented?

Architectural optimizations are often dismissed by outsiders as being "obvious", but what they don't see are all the other "obvious" ideas that don't work for various subtle reasons. I worry that if you make these techniques unpatentable, they'd become trade secrets and nobody would publish them, and we'd be worse off as a community.

I hear that "nobody would publish" argument bandied about... but is there any evidence to back it up?

These were University researchers. It's not like they're in it for the crazy-$$. In my experience, they'd publish anyway... for all the right reasons: furthering humanity, intellectual curiosity, academic prestige, etc.

University researchers like John Hennessy (current President of Stanford), who made a bunch of money founding MIPS and Atheros (both of which were valued heavily for their patent portfolios)?

The order matters. He was a professor first (did the publishing) and then founded the companies second. So there are really two questions:

(1) When he was a professor, would he have still published the work even if he couldn't patent and profit from it later? My assertion: Yes, he probably would've still published.

(2) Would he have founded MIPS (and made it successful) w/o the patents? I don't know, but that was beyond the scope of my claims. ;) Or perhaps: If he hadn't patented it, but just published... then would other companies have picked up his technology (for free) and integrated it into their products, resulting in a net win for society anyway?

Because of the 1-year publication bar, you basically have to prepare a patent application concurrently with publication. So in practice publishing and patenting are simultaneous.

You also have to consider the prospective impact of the rule. Would smart ambitious people go into the PhD/academia track if they couldn't parlay research into a business opportunity? Many wouldn't.

Of course patents aren't a necessary condition for turning research into a business. But for hard R&D type businesses, they're a pretty important criterion for getting investment.

Anecdotally, having worked at AMD and Intel, both companies have lots of internal knowledge about what it takes to produce a high-performance x86 processor. Some of it comes out as patents, but if you outlawed that, I guarantee you none of it would be known to the public.

Given the incredibly long life of patents relative to how quickly the computer industry moves (still, in the post-Moore's Law era), I'd argue trade secrets would be preferable.

If someone did implement this into a chip that was successful, it would take a lot less than 20 years to reverse engineer how they did it, even if they took steps to avoid reverse engineering.

The patent is at least more innovative than Slide to Unlock. If you encourage crappy patents, they are gonna bite you some day.

I actually think slide to unlock is brilliant, especially the old skeuomorphic "track on rail" one. The new, cleaner one is made possible by the old one being burned into our collective unconscious.

I also think “one click to buy” is brilliant.

Since it's impossible to unambiguously distinguish between crappy and worthy patents, we should either abolish them, or have them last for a really short time, like a year or two. Plenty for the inventor to secure their return on the research investment, not enough to stifle innovation.

>slide to unlock is brilliant

How? It's a digital slide bolt. They just digitized a simple mechanism that's been in use for hundreds of years.

Right, but using that in a human/computer interface was not possible/intuitive until capacitive touch screens were available, and even then it was a stroke of genius to "port" real-life object interaction into the software realm.

They didn't patent the gesture, they patented its usage in the human/software interface context.

That is the only context in which that gesture makes sense.

You couldn't patent the steering wheel just because you stick it in something that's not a car. You'd have to use it for something that's not steering or wheeling. Slide-to-unlock is obvious because it's based on a bolt that you ... slide to unlock. There's no novelty in unlocking something, software or not, by solving such a simple 'puzzle' as moving your finger sideways. It's certainly not a stroke of genius.

  > even then it was a stroke of genius to "port" real life object interaction into the software realm
This is my first encounter with someone who thinks that "doing X on a computer" is a stroke of genius. If X is not patentable, "porting" it into the software realm shouldn't be either.

You've apparently never met anyone from the patent bar. Their entire purpose nowadays is often to justify monopolies based on nothing more than doing it on a computer. [0] The judges of the CAFC -- the patent court -- make a deliberate effort to pose as innocent fools being astonished by the most basic applications of computers. [1] That's the root of the new power of the patent system.

Of course, the current subject proves that even legitimate worthwhile research can produce absurd and abusive patents.

[0] http://www.cafc.uscourts.gov/sites/default/files/opinions-or...


Something can be brilliant without being innovative. Most people have no problems operating a "slide your finger on the screen to do more" widget, even without being trained by Apple's original visuals.

Think about it. A touch screen can only detect a limited number of basic interactions. All interactions with the software must occur through those basic primitives. Touching the screen and moving your finger are really the only two things the interface can recognize.

I think you're conflating "brilliant" with "obvious".

It's like if someone was called brilliant for suggesting that we should turn our phone screens off when they aren't in use to save battery. It's just the obvious solution, and patenting it and enforcing that patent is just meant to create obstacles for competitors.

Apple thinks it owns the very idea of the smartphone. They even claimed they owned curved corners... It is nice to see them get a taste of their own medicine.

It would be great if they won, just so we could use the precedent to make them lose all of the asinine lawsuits they start.

I think they might be synonyms. Finding the obvious could be considered the pinnacle of insight. Of course, once it is found, it is obvious, but until someone says it, it may not be.

I agree that finding the obvious should not entitle you to "own" it or allow you to prevent others from using the insight.

Anyone who does claim that (e.g. Apple) should be ridiculed mercilessly for it.

I think brilliant or even innovative should not be the same thing as patentable. The (possibly impossible) standard should be "would someone else have come up with this?" In both of those examples, I think the answer is almost certainly yes.

Give 100 UI designers a touch screen and ask them to design a few unlock mechanisms each, and I think you will get slide to unlock pretty quickly. One-click purchase (is this a troll?) is basically saving billing and shipping details.

Here's the theory about what's supposed to be patentable:

Statutory material: No matter how brilliant, a poem, law of nature, mathematical algorithm or computer program is supposedly not patentable. Lots of details at http://www.uspto.gov/web/offices/pac/mpep/s2106.html

Novelty: It has to be new. This is where prior art comes in.

Non-obviousness: This is the test for patentability you're talking about. It can be hard to judge obviousness in hindsight. There are so many specialty areas today that it's unreasonable to expect the patent office to be able to determine what would be obvious to an ordinary practitioner in every area of invention.

I think it is pretty irrefutable that Jefferson would be against "but on a computer..." patents like slide to unlock based on the copious amount of writing he did on the subject of patents in his time. see, for example:


Is slide to unlock a clever UI mechanism to put on a phone? Sure. Should that entitle it to a 20-ish year monopoly as an idea irrespective of implementation? I don't see how a reasonable person comes up with an answer other than "of course not".

The entire patent system is so far twisted from what it was intended for, it is really quite a shame.

Luckily, patents aren't granted for ideas, but for specific executions. Granted, the execution is generally defined at a higher level than, say, "this exact chunk of silicon". But it's at a much lower level than the "idea".

When it comes to digital technology patents are often granted for ideas or concepts so general as to be ridiculous. Whether those patents stand up in court is another thing but even there -- as in this case -- there are no guarantees of sanity. I hope Apple will appeal.

The more general and broad the patent the more valuable it is. The system rewards big general patents.

In the case in question, there appears to be no implementation or execution by the UW folks, certainly not one appropriated by Apple. This looks like an "idea" about instruction scheduling to me.

> 5.1 Methodology: The results we present have been collected on a simulator that faithfully represents a Multiscalar processor.


Dynamic Speculation and Synchronization of Data Dependences, Moshovos et al, Proc. ISCA-24, June 1997

This PDF explains what I discuss below in more detail: http://moodle.technion.ac.il/pluginfile.php/315285/mod_resou.... Prediction of aliasing is discussed on slide 25.

The patent in question pertains to an optimization of what these days you'd call "memory disambiguation." In a processor executing instructions out of order, data dependencies can be known or ambiguous. A known data dependency is, for example, summing the results of two previous instructions that themselves each compute the product of two values. An ambiguous data dependency is usually a memory read after a memory write. The processor usually does not know the address of the store until it executes the store. So it can't tell whether a subsequent load must wait behind the store (if it reads from the same address), or can safely be moved ahead of it (if it reads from a different address).

If you have the appropriate machinery, you can speculatively execute that later load instruction. But you need some mechanism to ensure that if you guess wrong--that subsequent load really does read from the same address as the earlier store--you can roll back the pipeline and re-execute things in the correct order.

But flushing that work and replaying is slow. If you've got a dependent store-load pair, you want to avoid the situation where misspeculation causes you to have to flush and replay every time. The insight of the patent is that these dependent store-load pairs have temporal locality. Using a small table, you can avoid most misspeculations by tracking these pairs in the table and not speculating the subsequent load if you get a table hit. That specific use of a prediction table is what is claimed by the patent.
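To make the idea concrete, here's a toy Python sketch of that prediction table (not the patent's hardware, obviously -- real designs use a small associative table indexed by instruction address, and the class and method names here are invented for illustration):

```python
# Toy model of the dependence prediction table described above.
# A real implementation is a small hardware table keyed by the load's
# PC, with capacity limits and eviction; a Python set stands in here.

class DependencePredictor:
    """Remembers loads that previously aliased an earlier store."""

    def __init__(self):
        self.table = set()  # PCs of loads that mis-speculated before

    def should_speculate(self, load_pc):
        # Table hit means this load burned us before: don't speculate.
        return load_pc not in self.table

    def record_misspeculation(self, load_pc):
        # Called when a speculated load turned out to read the same
        # address as an older, unresolved store (pipeline flush).
        self.table.add(load_pc)


predictor = DependencePredictor()

# First encounter: no history, so we speculate -- and mis-speculate.
assert predictor.should_speculate(0x400A)
predictor.record_misspeculation(0x400A)

# Temporal locality: the same store-load pair tends to alias again,
# so the table hit now suppresses speculation and avoids a flush.
assert not predictor.should_speculate(0x400A)

# Unrelated loads are unaffected and still speculate freely.
assert predictor.should_speculate(0x4010)
```

The payoff is that only loads with a history of aliasing lose the speculation opportunity; everything else still executes out of order.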

Maybe this is worth a patent, or maybe not. For what it's worth, I don't think anybody was doing memory disambiguation at all in 1996. Intel was one of the first (maybe the first) to do so commercially in the mid-2000's. Apple's Cyclone architecture also does it, and I think it was the first in the low-power SoC space to do it.

Alpha 21264 (also from '96) had load store buffers that would notice the dependence violations and flush the pipeline during speculative execution. Sparc and power also had this to some extent with write buffers. I can't think of any that used a predictor though to decide whether to execute speculatively or not back then, they all just either did or stalled on the first potential violation. The patent appears novel for the time to me, thank you for digging it up and explaining.

DEC was working on it, allegedly for EV8: https://www.cis.upenn.edu/~cis501/papers/store-sets.pdf. They cite to the inventor of the patent in question: "Independently, Moshovos et al. published a comprehensive description of memory dependence prediction. This is the first published work identifying that memory dependencies are problematic for out-of-order machines."

That's a great paper and insight, thanks again.

> Maybe this is worth a patent, or maybe not.

Maybe start with another question. What do you think the odds are that there was any kind of causal chain from the invention of this technique by the patent holder to Apple's use of the technique?

An early paper by the inventor on the technique is cited over 300 times in Google Scholar, including by Hennessy & Patterson.

Good to know. Maybe the patent is warranted. Can you also argue that the $862 million is appropriate?

UWisc has always been very aggressive with its patents. I recall sometime during 2002 or thereabouts, while working for a reasonably big semiconductor company with DSP/ARM processors, one of the guys in our team with an interest in computer architecture used the company network to download and play with a simulator or something (might have been SimpleScalar). A few weeks later the head of our group got contacted by the company lawyers saying that UWisc was asking for licensing costs for using their tools (they provided the IP address that was used to download the tools). I'm not sure how it was finally resolved, but I don't think the company paid.

In general, I welcome the day when universities get what is coming to them for this kind of stuff (see also: Marvell vs CMU for 300+ million, reduced from 1.5 billion on appeal, etc).

In particular, given how much industry funds them, collaborates with their professors, etc, what is going on now is a remarkably stupid approach mostly driven by tech transfer offices that want to prove their value.

Which will be "zero", once the tech industry starts cutting them off.

You think they're funding researchers at market rates?

Do you think the universities would really be getting anywhere without the help of the industries they are now pissing on?

Take away the faculty awards, industry collaborations, donated labs, donated computing time, hiring of interns, etc.

>The University of Wisconsin–Madison is a public research university

So it's a university [mainly] funded by the tax-payer. How can it be that the research of this university isn't in the public domain? The public paid for it, the public should reap the benefits without paying again.

Sure, Apple tries their hardest not to pay taxes, but the patent isn't limited to them.

To complicate things a little bit, Intel actually funded the research underlying this patent. This was the crux of the WARF vs Intel suit a few years ago — Intel argued they received a license to the patent as part of their grant.


"Intel had supported Sohi's research with about $90,000 in gifts in the 1990s and argued it was entitled to the intellectual property that resulted from Sohi's work.

However US District Judge Barbara Crabb laughed Chipzilla's argument out of court and ordered the case to trial.

She said that the funding terms did not give Intel the right to use patents resulting from the work. However she said that any infringement by Intel was not willful because the funding agreements were ambiguous."

> So it's a university [mainly] funded by the tax-payer

This is not true, at least for most states. For example, the UW system got 1.2 Billion from the state out of a 6 Billion dollar budget: https://www.wisconsin.edu/about-the-uw-system/

The large majority of most state university funding comes through tuition, research grants, and donations.

Well the majority of tuition funding is on loan from the federal government and I'm sure a big chunk of that grant money comes from DARPA, NSF, NIH, etc.

That's not necessarily to say their work should be in the public domain but it would still be nice to see them focus on more productive uses of their IP rather than just license fee extraction.

The federal government generally makes a profit on student loans -- (most of them) get paid back eventually.

True. My point was more that their budget would look very different if not for federal guarantees on those (relatively risky) loans.

And patents!

WARF (the designated rights holder for UW patents) gives an annual gift of ~$60M to the University, if I recall correctly.

I don't know. I kinda like the idea of public universities licensing their research and patents to raise money, and I think the public benefit there outweighs making the research public domain, by decreasing the amount of public funding needed, and/or lowering tuition.

I mean, let's be honest here, a patent like this isn't particularly useful to the public because almost nobody outside of a few very large corporations can afford to implement it, and they stand to make a ton of money from it. The last thing Apple needs is publicly subsidized research.

You can thank congress for that: https://en.wikipedia.org/wiki/Bayh–Dole_Act

The reason for the act, I think, was to provide the incentive to commercialize the results of potentially valuable research. It's not an entirely loony idea.

I guess the university can provide better services[1] to the public if it's profitable.

[1] Like an NFL-class stadium.

I'm not trying to be snarky but are you being serious ?

Because NFL-Class Stadium is not one of the features I would think of if I was doing a bullet list of things a university should have.

My 30k student university has a sports hall and a gym.

Some of the major college football programs have a net worth over half a billion dollars each. The University of Texas Longhorns (men's football program) bring in over $130 million a year. UT has 50K students.


Yeah, in the conferences with TV deals the sports programs are not a money sink. The Big 10 certainly has a TV deal (it even has its own cable network).

Also, just the ticket sales for one game at Camp Randall Stadium bring in ~$4 million.

Notre Dame's television deal with NBC was just extended to 2025, at $15 million per season.

That must be why tuition fees are so high and record levels of student debt are required.

What an upside down system!

Nobody pays the sticker price. This podcast explains more:


And those copyrights get enforced.

"Table based data speculation" - so a lookup table?? Would be interested in knowing how innovative the patent is from someone with more knowledge in this field.

Not really, although the technique employs a look-up table. The basic idea is that they found that many instruction sequences that cause a mis-speculation once tend to cause it again in the future. This is done in the context of ILP -- mis-speculation means that the processor thought an instruction didn't need data from a previous one and executed it in advance, but then it turned out that the result did depend on the previous one, so the first result was squashed and the instruction was executed again.

To keep this from happening too often, they added a(nother) feedback loop to the whole process: if a sequence of instructions causes a mis-speculation often enough, it's added to a table, so that when it's encountered again, the control unit doesn't speculatively execute an instruction whose result is most likely going to have to be squashed anyway.

I don't know if there's prior art to this, but the basic idea is fairly straightforward. I wouldn't be surprised to learn that IBM or DEC engineers knew about this prior to 1996.

The first (and often only) thing to read is the claims. Claim 1 for example:

1. In a processor capable of executing program instructions in an execution order differing from their program order, the processor further having a data speculation circuit for detecting data dependence between instructions and detecting a mis-speculation where a data consuming instruction dependent for its data on a data producing instruction of earlier program order, is in fact executed before the data producing instruction, a data speculation decision circuit comprising:

a) a predictor receiving a mis-speculation indication from the data speculation circuit to produce a prediction associated with the particular data consuming instruction and based on the mis-speculation indication; and

b) a prediction threshold detector preventing data speculation for instructions having a prediction within a predetermined range.
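The two claim elements map onto a very small amount of machinery. Here's a toy Python sketch of element (a), a predictor updated by mis-speculation indications, and element (b), a threshold detector that blocks speculation within a predetermined range. All names and the counter/threshold values here are invented for illustration; the claim doesn't prescribe a particular predictor design:

```python
# Toy mapping of claim 1: per-instruction saturating counters as the
# "predictor", and a fixed threshold as the "prediction threshold
# detector". Values below are arbitrary illustration choices.

COUNTER_MAX = 3
THRESHOLD = 2  # the "predetermined range": counter >= 2 blocks speculation

counters = {}  # prediction state per data-consuming instruction (by PC)

def on_misspeculation(pc):
    # Element (a): the predictor receives a mis-speculation indication
    # and strengthens its prediction for this instruction.
    counters[pc] = min(counters.get(pc, 0) + 1, COUNTER_MAX)

def on_correct_speculation(pc):
    # Decay (real designs might instead clear the table periodically)
    # so a pair that stops aliasing can eventually speculate again.
    counters[pc] = max(counters.get(pc, 0) - 1, 0)

def may_speculate(pc):
    # Element (b): the threshold detector prevents data speculation
    # for instructions whose prediction falls within the range.
    return counters.get(pc, 0) < THRESHOLD


pc = 0x1000
assert may_speculate(pc)        # no history: speculate
on_misspeculation(pc)
assert may_speculate(pc)        # one strike: still below threshold
on_misspeculation(pc)
assert not may_speculate(pc)    # two strikes: speculation suppressed
on_correct_speculation(pc)
on_correct_speculation(pc)
assert may_speculate(pc)        # prediction decayed: speculate again
```

Note how broad that is: almost any history-based mechanism that sometimes says "don't speculate this load" fits the claim's two-element structure.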

Bear in mind the patent was filed in 1996 when considering the level of innovation.

Having a look-up table for branch prediction? If something happens a lot it might happen in the future. That doesn't even sound that novel.

There were branch-predicting mainframes back in the 80s, though it really came to the fore with the advent of superscalar microprocessors in the early 90s (MIPS R8000 and DEC Alpha 21064). What are the paper's innovations beyond these?

It's pretty clear from even a cursory look – the patent covers a second level of branch prediction, where the processor determines whether or not it should bother speculatively executing an instruction based on the previous outcome of that speculative execution.

Whether or not there was a prior implementation of this, I don't know – but it's also obviously more than a simple saturating counter branch predictor or whatever.


So inductive patent expansion. Take previous innovation, add 1, profit.

What about L0 and L4 caches? Or renaming sets of renamed registers? The problem, as others have outlined, is that patents are not concrete enough. Simply describing a problem is enough to be granted a patent. The value of the description of most patents is zero, which afaik is the opposite of their intended effect as a record and transfer of technology.

The blithe dismissal of the value of incremental improvement doesn't make you look smarter.

Thanks for the bespoke advice.

None of those processors had anything like this.

The innovation of the UWM paper was the MDPT and DDST, then due to practical manufacture reasons merging them, and then studying the trade-off with a simulator to arrive at a very efficient system.

For comparison, here is the IBM patent for the bigger more expensive approach used in Power4:


This is not even related to branch prediction; it is a scheme for controlling load speculation. This means executing a load out of order, even if there are stores with unresolved addresses ahead of it. This paper isn't proposing load speculation, it's proposing a predictor for when it should be done.

It seems to me that the processor is able to begin executing branches before the branch condition is known to be true.

    def func():
        if predicate():
            task1()

In the above program, the processor would be able to speculatively start execution of task1 before the predicate function returned a value.

(obviously the processor does this on a much lower level than this code)

From what I read it is mostly branch prediction with data pre-loading at the CPU level.

How is this journalism? It doesn't even tell you the damn patent number.

At the top of the gray bar there's a "Contact BBC News" link. (Not the "contact us" link at the very bottom of the page.)

For this article it's here: http://www.bbc.co.uk/news/20039682

Please do let them know that they need to start linking, or at least naming, documents that they're talking about. They do it all the time and I agree it's annoying. They'll discuss a medical study and not have any links to it. Sometimes they don't even name the report nor where it appeared.

When I see examples of that my nature is to immediately question the validity of the story.

This is how journalism operates on the internet. I don't think I've ever seen a report that had an actual working external link in it. I don't even click links in news articles anymore because they inevitably lead to some bullshit "more news of this type on our site" meta-page.

Given that only those currently affiliated with a university can read medical studies, I can't blame them too much for that one.

The portion of the BBCs audience who actually want to know the patent number is tiny, and the portion who would actually understand the patent itself is a tiny fraction of those, so why include it? Those who are interested can go out and find it themselves without much difficulty.

The job of a journalist isn't report every fact, it's to cut through the noise, take something that happened and condense it into something that their audience will read and can understand, without distorting it.

This article does exactly that. That's why it's journalism.

For the information content, this article could have been two sentences long.

No, it could have been one sentence long, which is why the opening section is only one sentence long. To quote directly from the article (the first sentence, in bold):

> Apple faces a bill of $862m (£565m) after losing a patent lawsuit.

It then goes on to describe what products the patent covers, when the patent was filed, what it does, what other companies have been sued for infringing it, what the outcome was, what the likely outcome is going to be in this case and the factors that are likely to have an effect. It then describes some recent related news which may be of interest.

It's a perfectly fine article for the audience it is aimed at. Most people don't give a shit about the details - not everyone reads HN.

The BBC have a very predictable style. Open with a very short (single sentence) summary that covers the main points. If the reader is interested, they read on, if not, they've already read enough to get the gist. As the article goes on, it will go into more detail, sometimes repeating what has been said earlier but going into more depth.

Stop reading BBC articles if you don't like their style, but that's what the BBC do, they are very good at it, and are respected world-wide because of it.

All I'm asking for is that they link to the patent (if it's a patent case) or the study (if it's a report on a study).

Have a look at this story: http://www.bbc.co.uk/news/health-34520631

That has an inline hyperlink to a previous story. It has a list at the end of the article to other articles about the same topic. And it also has a related Internet links.

So a link to the patent could be:

_The patent, filed in 1998_

Or it could be listed in related Internet information.

Most people aren't going to use it, but it saves time for everyone who does use it. And for medical studies it's probably important to get people into the habit of trying to read and understand them.

> It then goes on to describe what products the patent covers, when the patent was filed, what it does, what other companies have been sued infringing, what the outcomes was, what the likely outcomes is going to be in this case and the factors that are likely to have an effect.

Oddly enough, they got lots of these things wrong. Here's the patent in question: http://www.google.com/patents/US5781752

1. It was filed in 1996, not 1998.

2. The patent itself doesn't say anything about power efficiency (although perhaps that was argued at trial), so I'm not sure where they got that.

3. The article says it "relates to use of the technology in the iPhone 5s, 6 and 6 Plus - but an additional lawsuit making the same claim against Apple's newest models, the 6S and 6S Plus, has also been filed." That may be true, but it's so vague as to be pretty information-free.

The whole article reads like it was written by someone who doesn't know anything about patents, doesn't care to learn, and just uncritically copied information he read elsewhere. That isn't journalism.

> they got lots of these things wrong

> 1. It was filed in 1996, not 1998.

Ok, they used 'filed' when they should have used 'issued' or 'granted'. That's technically incorrect although unlikely to matter to a lay reader.

> The patent itself doesn't say anything about power efficiency (although perhaps that was argued at trial), so I'm not sure where they got that.

It's argued explicitly in the complaint. Not sure how you can count this as something they got wrong just because it's not footnoted to a level you're comfortable with; they're right.

> That may be true, but it's so vague as to be pretty information-free.

It is in fact true and I don't find it vague at all. What information do you think needs to be included in an article pointed at the casual reader?

> That's technically incorrect although unlikely to matter to layreader.

Details are important. Getting them wrong makes a journalist look sloppy and incompetent.

> It's argued explicitly in the complaint.

Is it? I withdraw this point if so. At a quick skim, the patent doesn't seem to have anything to do with power efficiency, but if it's in the complaint, I can't fault BBC for mentioning it in the article.

> What information do you think need to be included in an article pointed at the casual reader?

I dunno. Maybe this is the right level of detail for the casual reader. It seems extremely, uselessly broad to me though.

I'm curious how the university could discover that Apple was using its patent. The internal characteristics of the processor must be secret, right? Do they examine die photos and reconstruct the gate netlist?


There's a discovery process for civil cases.

So the university goes through this process with all semiconductor companies that are developing processors, just to be sure that they're not infringing?

I have one question: Do the professors teach this technique in classes?

I mean, that'd be funny, right? Teaching students something that you patented, waiting a few years for them to go into industry and apply what they learned, then suing them for it.

I have no sympathy for Apple in this matter. Considering the worthless, prior art ridden patents they used against their competitors they deserve the blowback. And in keeping with their modus operandi they ignored the University of Wisconsin and wilfully infringed the patent.

UW should take the money and use it to endow a chair of processor engineering.

Or maybe a table of processor engineering. That could work too.

$862m isn't that huge in the grand scheme of things. Not to mention, it's most likely not going to be $862m; my guess is it'll be less.

It's more than double what Apple paid for P.A. Semi and Intrinsity put together, the fabless semiconductor design firms that are the foundation of Apple's processor engineering capabilities.

That's really more an indictment of how little PASemi and Intrinsity sold for (in a world where WhatsApp sells for $20 billion).

I take your point, but if I were to pick any of those valuations as being adrift from reality it wouldn't be those for P.A. Semi or Intrinsity.

As you said: P.A. Semi and Intrinsity are "the foundation of Apple's processor engineering capabilities." That might have been their market value, but I think it's fair to say that the value to Apple was far greater (i.e. Apple got a huge purchaser's surplus in the deal).

Yup, no argument there. They've converted that talent into a significant advantage for their platforms.

> $862m isn't that huge in the grand scheme of things.

Well, it normalizes large patent damage awards, which isn't good, and it guarantees that non-giant companies will never, ever be able to compete in this space because investors are spooked by random near-$1b lawsuits.

Big companies seem not to mind the patent status quo for some reason. I suspect it just keeps competition away by raising the barrier to entry. This should be concerning to all. The refrain of "Apple can afford it" is scary as Apple is the world's wealthiest company. Of course they can afford it. That's beside the point.

If it's $862m, last I checked, it will take them about 30 hours to bring in the necessary revenue. If they want to pay out of profits (I'm totally guessing here; what are real net margins on their hardware? 15-20%?), it'll take a couple of days.
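For what it's worth, the "about 30 hours" figure roughly checks out. A quick sketch, assuming around $234b in annual revenue (approximately Apple's reported fiscal 2015 total; the exact figure is a guess here):

```python
# Back-of-envelope check: hours of Apple revenue needed to cover the $862m award.
# The $234b annual revenue figure is an approximation of Apple's FY2015 total.
award = 862e6            # damages award, in dollars
annual_revenue = 234e9   # approximate annual revenue, in dollars
hours_per_year = 365 * 24

hours_needed = award / annual_revenue * hours_per_year
print(f"{hours_needed:.0f} hours of revenue")  # roughly 32 hours
```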

Man, that's such a humbling stat. Many of us here don't even earn that in a year!

$862 million?

> Man, that's such a humbling stat. Many of us here don't even earn that in a lifetime!


Unless I missed the /s ;)

Even more exaggerated than that. Many families don't earn that across several generations. That's about 17,000 years at $50,000/year. At 40 years of working life, you'd need over 400 people.
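The figures in that comparison work out. A quick sketch using the comment's own numbers ($50,000/year, 40-year working life):

```python
# Check: years (and working lifetimes) of a $50k/year income needed to total $862m.
award = 862e6        # damages award, in dollars
salary = 50_000      # dollars per year, per the comment
working_life = 40    # working years per person, per the comment

years = award / salary            # about 17,240 years
people = years / working_life     # about 431 people
print(f"{years:.0f} years, or {people:.0f} working lifetimes")
```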

The combined income earned by anyone in the entirety of history who is the least bit related to me, adjusted for PPP, probably does not equal $800MM+.

Is this a weird meta-joke where you pretend to miss the joke?

We're also talking about one of the largest companies in the world, so it's really not that shocking.

And Apple saves a lot of money by not paying taxes /s

Oh it's just Apple?

No of course not. There's also Facebook, Amazon etc. etc. etc. But the article is about Apple.

Don't forget UWisc!

This is the best! I certainly don't make this much in a year.

It would take ~8 days of net income (they declared $10 billion of net income in the most recently reported quarter).
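The "~8 days" figure follows directly from the $10 billion quarterly net income mentioned, with a quarter taken as roughly 91 days:

```python
# Days of net income needed to cover the award, given ~$10b net income per quarter.
award = 862e6                  # damages award, in dollars
quarterly_net_income = 10e9    # as declared in the most recent quarter
days_per_quarter = 91.25       # approximate (365 / 4)

daily_net_income = quarterly_net_income / days_per_quarter
days_needed = award / daily_net_income
print(f"{days_needed:.1f} days")  # roughly 7.9 days
```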

However, given that large companies face large numbers of patent suits, typically litigating several in parallel, spending all this money on just one could indeed be seen as distressing.

If it keeps smaller companies out of the market because they are not able to join patent cross-licensing cartels, then it may be a price worth paying.

$862m? Isn't that a little bit less than what Samsung paid to Apple?

I tell ya, this tech news stuff is sometimes more entertaining than infotainment on TV.

Awesome, maybe the Brewers need a new stadium too.

What if patents could only be held by individuals and not corporations?

And then the individuals would grant use licenses to the corporations, so they can implement the process/manufacture the good/etc?

That is essentially how it works now, actually. The patent has to be awarded to an inventor (or more than one) who is a natural person. But they typically assign it to a company when the invention was made by someone working for that company.

I suppose you’d have to fix that “American corporate personhood” problem first

Corporate Personhood is an intentional design goal of US law.

The earliest case I'm aware of dealing with "corporate personhood" is The Rev John Bracken v. The Visitors of Wm & Mary College, from 1790, which is discussed in the linked article [0] (page 434 discusses the founder's wishes, treating the corporation as an extension of the will of its founder, through its charter.) The rest of US case law, long before Citizens United, follows this same pattern -- corporations, being "merely associations of individuals united for a special purpose" [1], function legally as individuals in many ways, with the right to speak and own property and other such things that fit with the purpose for which they are founded.

A relevant quote from US case law: "The principle at stake is not peculiar to unions. It is applicable as well to associations of manufacturers, retail and wholesale trade groups, consumers' leagues, farmers' unions, religious groups, and every other association representing a segment of American life and taking an active part in our political campaigns and discussions .... It is therefore important -- vitally important -- that all channels of communication be open to [all of the above types of associations] during every election, that no point of view be restrained or barred, and that the people have access to the views of every group in the community." [2]

[0] http://scholarship.law.wm.edu/cgi/viewcontent.cgi?article=23...

[1] Pembina Consolidated Silver Mining Co. v. Pennsylvania, 1888 https://supreme.justia.com/cases/federal/us/125/181/

[2] United States v. Auto Workers, 1957 https://supreme.justia.com/cases/federal/us/352/567/

I don't get how they settled out of court and then did it again; that seems really bizarre.

Settled with a different company.

Article says it was UW in both cases.

edit: disregard, I don't read good

UW holds the patent.

UW sued Intel. This lawsuit was settled.

UW then sued Apple. A different company.

Why would the fact that the suit with Intel was settled impact their ability to sue Apple? What's the source of confusion here?

Whoops, I can't read this morning.

Oh duh. whoops

This is sort of a depressing precedent. Do we really want to turn our universities into patent trolls?

UW isn't a patent troll. It does cutting edge research and it expects a cut for pushing technology further.

Typical trolling involves finding a dogshit patent and then extorting companies.

This is exactly the reason patents exist: to allow inventors to exist separately from manufacturers.

A research university does research that can lead to patentable inventions. Organizations which invent things really aren't patent trolls.

It has already happened. As was pointed out, it is a direct result of the Bayh-Dole Act [1].

[1] https://en.wikipedia.org/wiki/Bayh%E2%80%93Dole_Act

If they are trolling other patent trolls, then why not?

Only on Hacker News are Apple and Intel labeled patent trolls.

Two wrongs don't make a right.

A self-destructive spiral might spur legislative action.

Sometimes it does. Fight fire with fire, an eye for an eye: history is replete with examples of common sense refuting what you're saying.

You have to fund that stuff somehow, the state governments certainly aren't doing it anymore.

It's too late to some extent, though the patent situation has improved a little recently. The licensed patents that a startup I worked at held were used to sue Google when the startup failed. The suit was started by the company but finished by the university.


You know what they say: Live by the patent sword...

Why doesn't Apple start lobbying for real patent reform?

Because it makes no sense for them to do so. Large companies join in cross-licensing agreements, keeping smaller competitors out. Occasionally some small player comes along with an important patent, and they refuse to be bought off and win in court. That makes the news like today because it happens so rarely. But it's a price worth paying to keep competition out of the market.

Competition is so tedious.

This isn't a small company with a new idea that doesn't want to be bought out; this is a university.
