
Mill Computing in 2017 - reitzensteinm
https://millcomputing.com/topic/mill-computing-in-2017/
======
notacoward
For the uninitiated, the bringup of a new processor is very complex - from
compiler to libraries to kernel, from emulation to real hardware. The process
described here is very similar to the one I was able to observe at SiCortex,
though that chip was based on an existing one and so there was no FPGA stage.
I'll bet they even use some of the same tools, such as the emulator. Kudos to
them for not rushing ahead before their foundation is ready, and I look
forward to seeing how this all works out.

~~~
CalChris
_Bringup_ is when you throw everything together and that means that you have
everything to throw together. It's like that Johnny Cash song, _One Piece At A
Time_.

    
    
      Now, up to now my plan went all right
      'Til we tried to put it all together one night
      And that's when we noticed that something was definitely wrong.
    

That's _bringup_. Compiler development ain't part of bringup. They
could/should have been doing an LLVM backend against a software simulator (and
eventually an FPGA simulator), which is the traditional approach handed down
from our knuckledragger forefathers, going back decades.

Still, I truly wish them well. It beats the crap out of 99% of the YC startups
I see. The 37th recommendation engine, a webstore for sneakerheads, ...

~~~
notacoward
The bringup process includes all the stuff leading up to that final moment,
just as most other processes - e.g. a release process - include the stuff
leading up to their conclusion. The quibble really wasn't very constructive.

~~~
CalChris
[https://en.wikipedia.org/wiki/Integrated_circuit_development...](https://en.wikipedia.org/wiki/Integrated_circuit_development#Bringup)

"After a design is created, taped-out and manufactured, actual hardware,
'first silicon', is received which is taken into the lab where it goes through
bringup. _Bringup is the process of powering, testing and characterizing the
design in the lab._ "

~~~
notacoward
Well, sorry, but I worked with a whole lot of people who were designing,
verifying, and building that processor, and they used "bringup" for the entire
process. I was part of that process myself. Actual experience counts for more
than argumentum ad wikipedia. Also, even if the word was wrong, that matters
far less than the context in which it was used. Dictionary flames are the last
refuge of someone with nought worthwhile to say.

------
unwind
My only exposure to Mill (besides this page link) has been random comments in
various stories here over the years.

They typically start with "(Mill team)", and then the rest sounds like it came
from the future. It's all about getting the flux capacitors charged.

Can anyone post some low-level code (in assembly, or the closest equivalent if
that term is no longer relevant with Mill) implementing some well-known simple
algorithm on the Mill? Just to quickly get a feel of what it's all about, how
it looks.

Something like computing n! for word-sized integers, or strcmp(), or whatever.

Does anyone have a quick link to something like that?

~~~
Solarsail
Well, there are a couple of pieces of sample code on comp.arch in Google
Groups / Usenet. Sadly, I can't find any examples of genAsm anywhere, just a
couple of fragments of conAsm. Maybe that's better, as conAsm is what's going
to actually run.

This thread, for example, talks about the instruction density of Mill
programs. Mill does somewhat poorly, with a pre-alpha-quality compiler doing
the codegen:
[https://groups.google.com/forum/#!topic/comp.arch/RY3Bk7O61u...](https://groups.google.com/forum/#!topic/comp.arch/RY3Bk7O61uA)

Comparing several ISAs on the simple program in the first post, a Mill Gold
CPU binary weighs in at 337 bytes (and 33 instructions), compared to:

* powerpc: 212 bytes, 53 instructions

* aarch64: 204 bytes, 51 instructions

* arm: 176 bytes, 44 instructions

* x86_64: 135 bytes, 49 instructions

* i386: 130 bytes, 55 instructions

* thumb2: 120 bytes, 51 instructions

* thumb: 112 bytes, 56 instructions

~~~
igodard
GenAsm for the factorial function (conAsm in a different reply, above):

    
    define external function @fact w (in w) locals($3, %9, &0, ^6) {
    label $0:
        %1 = sub(%0, 1) ^0;
        %2 = gtrsb(%1, 1) ^1;
        br(%2, $1, $2);
    label $1 dominators($0) predecessors($0, $1):    // loop=$1 header
        %3 = phi(w, %5, $1, %0, $0);
        %4 = phi(w, %6, $1, %1, $0);
        %5 = mul(%3, %4) ^2;
        %6 = sub(%4, 1) ^3;
        %7 = gtrsb(%6, 1) ^4;
        br(%7, $1, $2);
    label $2 dominators($0) predecessors($0, $1):
        %8 = phi(w, %0, $0, %5, $1);
        retn(%8) ^5;
    };

Higher-end Mill models have more and bigger things to encode, and so a given
program will occupy more bytes than it will on a lower-end member. Thus a belt
reference on a Gold is 6 bits, but only 3 on a Tin.
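
For anyone who doesn't read genAsm, here's a rough C rendering of @fact (a
sketch based on my reading, not an official translation: the phi ops at $1
become the loop-carried variables, which I've named acc and i, and gtrsb(x, 1)
is a signed x > 1 test):

    
    /* Sketch of the genAsm @fact in C (my reading, not official).
     * %0 is the argument n; the phis at $1 carry acc and i around the loop. */
    int fact(int n) {
        int i = n - 1;        /* %1 = sub(%0, 1)               */
        if (!(i > 1))         /* %2 = gtrsb(%1, 1); br(%2,...) */
            return n;         /* $2: %8 = phi(w, %0, $0, ...)  */
        int acc = n;          /* %3 = phi(w, ..., %0, $0)      */
        do {
            acc *= i;         /* %5 = mul(%3, %4)              */
            i -= 1;           /* %6 = sub(%4, 1)               */
        } while (i > 1);      /* %7 = gtrsb(%6, 1); br(%7,...) */
        return acc;           /* $2: %8 = phi(w, ..., %5, $1)  */
    }

So fact(5) multiplies 5*4*3*2 = 120, while fact(1) and fact(2) take the early
exit and return their argument unchanged.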

------
sporkenfang
Silly question: does this company retain full-time people, or is their work
primarily done by folks employed elsewhere? Maybe I just haven't reached
whatever point in life where a salary is no longer necessary, but I was
surprised by their About page.

~~~
seibelj
My understanding is that it's a group of enthusiasts and volunteers who are
donating sweat equity out of enjoyment and the gentlemen's agreement of full-
time employment once the company really gets going.

------
fithisux
Instead of bashing Mill, the real question is what we open-source developers
can do to help them.

Implement algorithms? Improve FOSS tools?

@Mill team,

can you set up a list of FOSS projects that we can contribute to in order to
help? I specialize in implementing algorithms, especially optimization and
other stuff.

~~~
igodard
See [https://millcomputing.com/#JoinUs](https://millcomputing.com/#JoinUs)

NDA and sweat-equity agreement required; you get full-vested long-term options
monthly; expected to be actual stock monthly after the next round.

To help in a FOSS context: we are not yet ready to put out an SDK to the FOSS
community, or to anyone really. That will wait until after our cloud
environment is up - and a few more patents have been filed (the ISE exposes
things we want to protect).

In the meantime, the most help would be to support existing microkernel
operating system efforts such as L4
([https://en.wikipedia.org/wiki/L4_microkernel_family](https://en.wikipedia.org/wiki/L4_microkernel_family)).
The Mill will have a big impact on conventional OSs like Linux, but those
suffer from built-in assumptions that open security holes and make the OS a
dog that the Mill can only train a little. The big win is in microkernels,
which can take advantage of the Mill's tight security and ultra-fast context
switching.

Or support languages that have central micro-thread concepts, such as Go
[https://en.wikipedia.org/wiki/Go_(programming_language)](https://en.wikipedia.org/wiki/Go_\(programming_language\)),
for the same reason.

But whatever you do, don't tune the microkernels or microthreads for a
conventional core; instead, do it right, and we'll be along to help eventually
:-)

------
pja
I’m slightly surprised that they didn’t have an FPGA implementation running as
proof of concept already.

Regardless, I’m looking forward to seeing what MillComputing can deliver in
the real world.

------
riazrizvi
According to the CTO, 80% of software operations are loops
([https://youtu.be/QGw-cy0ylCc?t=454](https://youtu.be/QGw-cy0ylCc?t=454)). I
need convincing of that before I watch the rest of the talk. I suspect it is a
fundamental premise for the architecture of the chip, and if it's not true,
the stats are probably skewed by an unusual benchmark.

~~~
kainolophobia
100% of software operations are loops, it's just that some of them are only
run once ;)

Joking aside, computers help us handle loops more efficiently; that's it. The
20% "non-loop operations" are just the fabric that ties the loops together.

------
pawadu
I haven't followed mill after attending a few lectures years ago.

Does anyone think this is commercially viable? Or are we looking at a new
Transputer?

~~~
jlouis
The Mill architecture has lots of ideas which are really good on paper. We've
had the same situation before, around 2000 with Itanium. There were many parts
of that CPU which looked good on paper. In hindsight, the Itanium was too
complex.

Commercial viability of the architecture depends on how x86-64 and ARM fare
in the coming years. If they stagnate and can't produce faster single-CPU
cores _and_ people _continue_ writing software which lives in a single-core
world, then there is room for a CPU delivering a 10-fold increase in
performance per watt per dollar.

If, on the other hand, most of the heavy computation is moved into either
SIMD-style GPUs or MIMD-style, Adapteva (Parallella)-like solutions, where you
have a thousand cores, then the Mill is less likely to succeed.

I'm optimistic that someone is trying to rethink CPU design from the bottom
up. It is a gamble, but if we don't try, we won't really learn whether there
is anything better out there than the current generation of out-of-order
chips.

I'm pessimistic because Mill smells of Itanium. It is a mix of many new,
unproven, ideas inside a single solution. Some of the ideas are genuinely good
and might stick if thrown at a wall. But if you are gambling on having many
new ideas at the same time, it is likely that some of them will turn out to be
bad ideas.

I'd place the majority of my money on simple ISA designs, such as ARM or
RISC-V, preferably in a many-core design, but I'd hedge a good sum on Mill as
well.

~~~
XorNot
The problem with the Mill is that it's not clear to me what advantage it can
hope to seize over the behemoth that is Intel. If the market opens up enough
to justify new architectures, then it's also opening up enough to let Intel
pivot into that space, and unlike everyone else, Intel owns the fabs to do it.

~~~
peller
This is drastically over-simplified, but in short, the Mill is a sort of
middle-ground between superscalar implementations (x86, ARM) and VLIW
(Itanium, DSPs).

Superscalars leave a lot of the optimization process to complexity in the
hardware. This is seen in stuff like the out-of-order scheduling and large
cache hierarchies. In one of the Mill talks, it's guesstimated that almost 90%
of the circuitry is dedicated to simply moving data around, as opposed to
actually doing any work on that data.

VLIWs take the opposite extreme, leaving the complexity of optimization to the
compiler. History has shown so far, though, that many computing problems are
(again, oversimplifying) too conditionals-heavy to really benefit from VLIW,
and end up running much more slowly.

So the Mill is a bet that they can get the benefits of each approach without
the major drawbacks of either. This isn't really ground Intel could simply
"move into" without half a decade plus of work, and even then, they'd be
cannibalizing their x86 ecosystem, which is not a risk most entrenched
corporations are fond of.

~~~
XorNot
But that's the point: if I want to buy non-Intel at my company, I've got
contracts, lawyers fees, upper-level management etc. to convince to do it.
Intel is probably sending me support personnel and sample hardware.

And I'm _still_ looking at recompiling, debugging and deploying a ton of
software to see advantages from the Mill.

So whatever advantages it brings, they need to be _very_ substantial (i.e. if
the next-gen of Intel x86 chips still out-perform it, I'll buy them) and quite
quick - because I can go 5 years not buying the Mill while Intel promises to
support me for their awesome new architecture. I mean, probably we buy some
Mill machines, but how likely is it that it's game changing on my codebase?

That's where I see the problem. There's this whole huge assumption that the
Mill will yield a bunch of benefits. If they're clear cut (a _huge_ if) then
they still have to beat their competitors being able to brute-force
performance improvements until they come up with a new architecture
themselves, which can take advantage of the very compensation they're asking
from their customers ("switch architectures, it'll be great we promise").

~~~
Symmetry
There are a few places I could see Mill making inroads.

The security features could be very useful for the big internet companies
(Amazon, Google, Facebook, etc.), and they can afford to spend a few tens of
millions of dollars on something speculative like this. And they do, with
things like ARM or OpenPOWER servers.

There's also the high end embedded land where you don't need to run a
traditional operating system. Network switches, cell towers, that sort of
thing.

------
WhitneyLand
It's exciting to think about a 10x improvement in CPUs. This would literally
change our lives.

For anyone considering getting involved or investing could you fill in a
couple of bio details?

>Ivan Godard has designed, implemented or led teams for...an operating system,
an object-oriented database

Cool, which OS and database?

>taught computer science at graduate and post-graduate level at Carnegie-
Mellon University

Which courses?

~~~
codezero
Someone asked something similar a while ago:
[http://blog.kevmod.com/2014/07/the-mill-
cpu/](http://blog.kevmod.com/2014/07/the-mill-cpu/)

Ivan responded to the post but didn't address the questions about his bio at
all, but perhaps he doesn't feel the need to.

For the curious, his name apparently was Mark Rain for some time in the
70s/80s [1] – there are a number of publications under that name.

[1]
[http://newsgroups.derkeiler.com/Archive/Comp/comp.arch/2012-...](http://newsgroups.derkeiler.com/Archive/Comp/comp.arch/2012-08/msg00444.html)

~~~
WhitneyLand
Why would he not feel a need to?

All of us have to maintain detailed bios when seeking funding, trying to build
a team, or building corporate partnerships, all of which appear to be on the
table.

~~~
igodard
Our industry is a surprisingly small pond, at the top anyway. No one would
invest in the Mill based on my formal bio "College dropout; never took a CS
course in his life" :-)

Instead they invest based on our technology, most of which (and eventually
all) we make publicly available. You may not be able to judge it, but any
potential partner has people who can. One of the things that encouraged us in
our long road is that the more senior and more skilled the reviewer, the more
they loved the Mill. Quotes like "This thing is the screwiest thing I've ever
seen, but it would work - and I could build it".

~~~
WhitneyLand
I don't think many people care about a degree. It's just weird that you ask
interested people to join you but seem reluctant to give details on your bio.
A lot of people factor it in when deciding on joining or dealing with a
startup.

~~~
igodard
Judge us on the tech, not on me. To be honest, if you cannot already
understand what we have put out to the public well enough to know that you
want to work on it, then you are probably still too junior to be really useful
as we are now. We can't afford the cost of ramping people up if they are not
already there.

As we grow there will be more place for beginners, but not yet. Mind, that's
beginners as in concept understanding, not beginners as in age or degrees. I'm
a dropout, and this year we added an intern in Tunisia who's still finishing
his exams. We let people self-select, we don't try to persuade them. And we
don't pay them, so only the convinced join.

In a way, at the top of engineering things work much more like they do in the
arts: you are judged by your portfolio, not by your background or education.

~~~
pc2g4d
"Judge us on the tech, not on me." Hans Reiser and ReiserFS taught everyone
that the person's bio _does_ matter for the project.

~~~
codezero
I don't think anything in his bio or not in his bio conveyed that he would
murder his wife and go to prison for life thus abandoning the project.

------
lacampbell
I am glad this is still going - if only for the reason that different computer
architectures fascinate me. IIRC they said the belt machine thing maps well to
SSA - you specify your operands by position, but the destination register is
implicitly the back of the queue - in kind of the same way that each "a op b"
in SSA is saved to a new variable.

[http://millcomputing.com/wiki/Belt](http://millcomputing.com/wiki/Belt)
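
A toy C illustration of that mapping (my sketch; the belt positions in the
comments are made up for illustration, not real Mill encoding): every result
gets a fresh SSA name, just as on the belt every result drops at position b0
and pushes older values back.

    
    /* SSA on the left, an imagined belt on the right.  Assume the two
     * arguments start the belt as b0=a, b1=b (illustrative only). */
    int f(int a, int b) {
        int t1 = a + b;   /* add b0,b1 -> t1 drops at b0; a,b shift back */
        int t2 = t1 * a;  /* mul b0,b1 -> t2 drops; belt is t2,t1,a,b    */
        int t3 = t2 - b;  /* sub b0,b3 -> t3 drops at b0                 */
        return t3;        /* retn b0                                     */
    }

The point is that neither SSA nor the belt ever overwrites a value: each op
just names how far back its operands have drifted.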

~~~
greglindahl
Ivan is a compiler guy, and I've always seen a huge impact of "compiler guy"
thinking on Mill -- in a much smarter way than how "compiler guy" thinking
sank Itanium.

------
em3rgent0rdr
> "We considered going out for our next funding round earlier, but decided to
> wait until our technology was confirmed by issued patents"

An example of patents slowing down innovation rather than promoting it.

------
Nullabillity
> So far our patent experience has been excellent. We have successfully
> refuted all the prior art cited by the Examiners. While we have rephrased
> our applications to suit the Examiner in several cases, the substance of our
> claims have been allowed without exception. The Mill is truly novel – and we
> now have the USPTO imprimatur to confirm that.

So it won't be useful for another 20ish years. Yay?

> It will, naturally enough, break. However, by hosting it ourselves we can
> capture what failed, and use that as raw feed for our test/debug team.

Wow, this is getting more and more depressing.

~~~
wyager
Not sure about the patent stuff, but "cloud"-only compiler hosting is an
absolute show-stopper for me.

~~~
al2o3cr
At a minimum, you'd need to spend a stack of cash verifying that the EULA of
the cloud compiler didn't include an "ALL UR INVENTIONS ON THIS ARE BELONG TO
US" clause - which would be only prudent, given how intent the _company_ is on
planting its flag in IP.

~~~
igodard
Such paranoia is amply justified in our industry. However I expect that we
will continue our iconoclastic approach to legalisms so long as the founders
keep control. Our model for our cloud software follows the example pioneered
by Greg Comeau; you can see his at
[http://www.comeaucomputing.com/tryitout/](http://www.comeaucomputing.com/tryitout/).

