
Even if Mill Computing can't get off the ground, eventually the patents will expire and someone will pick up on the ideas.



I'm pretty sure Mill Computing will never get off the ground. If they were intent on producing something, they would've started by now; the only thing they've worked on from the beginning is patents. And now that the focus is on RISC-V due to its openness, Mill is unlikely to see success unless they go the same route, because people don't want a locked-down processor.


Mill is not an alternative to RISC-V; their instruction set is a lot closer to hardware-specific microcode (much like VLIW architectures in general) than to a truly general-purpose ISA. RISC-V admits extensions, but aside from that it's fully general, and most use of RISC-V will target the standard ISA configs.


Even if you intend to have a completely open project, patents aren’t necessarily a bad thing.

Patents prevent others from patenting the same idea. Even if prior art exists, someone can still get a patent granted, and the legal battle to revoke it can be costly if you don’t have large corporate backing.

So getting patents and offering them under an open FRAND license is a good option; some of the most common standards we have work this way.

Ofc if the only reason you get patents is to commercialize them later it’s going to be quite hard to get people to adopt your stuff.


It's a shame they haven't got anything to market. It's such a clever re-think of how instructions operate. If anything, they are probably too far ahead of their time.

Reminds me a bit of Transmeta. "Code morphing" sounds very much like the instruction reordering that's standard now.


Transmeta was an ISA emulator + JIT, and it did ship (it's used by Tegra's "Project Denver" CPU cores).

Doing an emulator/JIT at this level has some severe problems & a very sketchy security story. It's unlikely that anyone else wants to do this, and "AOT" solutions like Rosetta 2 are almost certainly the better approach to ISA flexibility.


Mill's approach is pretty similar to Rosetta in this regard - binaries are distributed in a relatively high-level abstract format that's a bit like LLVM IR, and code generation for a specific chip's concrete ISA is done at/before installation time.


I watched Godard's lectures a few years ago, and he talked about different price points for the Mill - 'tiers' of processors that differed mainly (iirc) by the size of the belt. He made some claims about porting binaries from one tier to another without recompiling from source, and it struck me as a very optimistic claim, for the reasons you hinted at.

My bread and butter work isn't very close to the metal, but you sound more experienced with this sort of thing. Are you familiar with the Mill? Do you think they have a chance of avoiding the weeds that Transmeta got stuck in?


I'm not familiar with the Mill at all, but the problem of doing a JIT at this level is where do you store the result and how does the JIT actually run? The CPU can't exactly call mmap to ask the kernel for a read/write buffer, fill it with the JIT result, and then re-map it r+x. So you get things like carveouts, where the CPU just has a reserved area to stash JIT results, which is then globally readable/writable by the CPU. Better hope that JIT doesn't have an exploit that lets malicious app A clobber the JIT'd output of kernel routine B when app A gets run through the JIT!

The kernel isn't aware of the JIT happening, either, as the CPU can't launch a new thread to do the JIT work. So as far as all profiling tools are concerned, it looks like random functions just suddenly take way longer for no apparent reason. Good luck optimizing anything with that behavior.

And the CPU also can't cache these results to disk, obviously, so you're JIT'ing these unchanging binaries every time they're run? That's fun.

Maybe all of that is solvable in some way, sure. CPUs can communicate with the kernel through various mechanisms, of course; you could build it such that the JIT is a different task somehow that's observable & preemptable, etc... But it's complicated & messy, and very complex relative to what CPU microcode is typically tasked with, for a benefit that seems quite questionable. It's not like a CPU doing the JIT is going to be any more optimal than the kernel/userspace doing the JIT - it's trivial (and common) to expose performance counters from the CPU, after all.

That doesn't mean a CPU designed for a JIT is inherently bad, it just means doing this at the microcode level like Transmeta was doing is a bad idea.


Probably the patents are part of the reason it hasn't gotten off the ground?



