Compiling my own SPARC CPU inside a cheap FPGA (thanassis.space)
266 points by ttsiodras on Oct 20, 2019 | 127 comments

For those interested in hacking on this:

The Pano G2 FPGA is a monster, but prices on eBay have gone up a lot. My cheapest buy was 25 of them for $85 (including shipping!). They now go for around $30 apiece if you’re lucky... or $200+ for a lot.

The Pano G1 (with VGA instead of DVI) is cheaper but has a much smaller FPGA, though still large by hobby standards.

The benefit of the G1 is that all interfaces are working now, including DRAM, USB, Ethernet.

Last week, Skip Hansen got a full CP/M system running on one: https://github.com/skiphansen/pano_z80

USB on the G2 is hard. A bunch of people have tried and failed.

OP here - just a note, that "tverbeure" is the Tom I mention in my blog post.

None of this would have been possible without you, Tom! Thanks for everything.

Well you two owe me $32 :-). Kidding of course but the G2 looks like a cool gizmo and so I got one.

I'm wondering if my Scarab code for the LX6+ will move over.

What's the Scarab and LX6+? The closest I'm finding is an ESP32 beetle which has a Tensilica LX6 processor.

My bad, it is this one: https://www.scarabhardware.com/minispartan6/

I built a frame buffer for a Cortex-M4 on it.

I never was a fan of the Zilog Z80 (its instruction set reminded me too much of the idiotic Intel 80x86 family), but seeing someone run the Control Program for Microcomputers on a whopping 25 MHz Z80 is incredibly cool... just imagine what kind of scene demos could be coded on this monster for the next demo party...

I wonder if it would be possible to retrofit an MMU onto a Z80 design and port a real UNIX to it? A 25, 50, or even 75 MHz UNIX server on a Z80 processor, that'd be a nice perversion...

> [the Z80's] instruction set reminded me too much of the idiotic Intel 80x86 family

That is because the Z80 was a (mostly) binary-compatible superset of the 8080 instruction set, and the 8086 (while not binary compatible) was intentionally modeled after the 8080, such that there was a program that could take an 8080 program, do a 1:1 instruction mapping from 8080 to 8086, and produce a working 8086 program.

> I wonder if it would be possible to retrofit an MMU onto a Z80 design

Zilog made a number of Z80 follow-on CPUs, and some have MMUs. It is amazing how far they took the instruction set; the Z80380 could run Z80 code as-is, but it also extended instructions to support 32b operations and had a mode with a flat 32b linear address space. I don't recall ever hearing any consumer product using it though.

Z80380 user’s manual, for those interested: https://www.manualslib.com/manual/1237352/Zilog-Z80380.html

This guy ran Linux on an ARM emulator on an ATmega. A similar feat should be possible on a Z80.


I remember that. That was emulation and it was not usable the way a 75 MHz Z80 could be, running code directly on the CPU (even if it is an FPGA).

If you want to see a Unix like OS on an 8-bit CPU, get a Motorola 6809. It was designed with high level languages in mind (four 16-bit index registers, two 8-bit accumulators that can act as a single 16-bit accumulator) and can even do position independent code quite easily (PC-relative addressing modes). Quite an underrated CPU.

You could run something like Inferno without an MMU: "Limbo programs run safely on a machine without memory-protection hardware". It would probably be more efficient than emulating some other architecture. Arguably, it'd be better than Unix :p

FUZIX is a tiny UNIX that runs on a Z80 using memory banking rather than an MMU.


I’m confused. I did a bit of Googling, and it appears that Pano Logic used Xilinx FPGAs. I’m guessing the Pano G1/G2 contains a board that the Xilinx FPGA is housed on?

At first, I was extremely impressed that a 50 employee company could design their own FPGA chip.


Here are the internals of the G2: https://tomverbeure.github.io/pano/logic/2018/12/02/Pano-Log...

So, Rev C is G2 [Edit: nope]. What's the cutoff between G1/G2? Is "Rev B" G1 or G2?

(Granted, I mostly see Rev C.)

The revs that are posted on eBay are useless, even though that’s what’s advertised. Rev C is usually G1, rev B usually G2. But not always. ;-)

The only reliable indicator to check which one to buy is looking at the pictures: VGA is G1, DVI is G2. (Some G2 are photographed with a DVI to VGA dongle plugged in, adding to the confusion.)

There are 2 G2 versions: one with an LX100 and one with LX150. You can’t know which one you’re buying...

LX100 is still a gigantic FPGA for hobby stuff.

Ah. Most of the devices on sale on ebay have VGA connectors.

Found one with DVI and just bought it, @ $22.95.

Thanks for the advice!

I just want to second this! I think I just ordered an LX100 for that price (plus US$25 for shipping to ol' Europe).

//edit: Next thing tomorrow: Order the stuff necessary to program this (or check if I can use company equipment).

There are different options: my preferred one is to use something like this: https://www.aliexpress.com/item/32940620936.html

If you're willing to forgo the Xilinx special features such as ChipScope, you can also use OpenOCD and pretty much any OpenOCD compatible JTAG cable to upload a bitstream into the FPGA.

That’s an excellent price.

Have fun with it!

How much would it cost to buy hardware equivalent to the Pano G2 as new?

For a complete SPARCstation 5 implementation able to run SunOS, Solaris, BSD, Linux or NeXTstep, see http://temlib.org/ - this fits on a Spartan6 XC6SLX45T FPGA, so you should be able to get it to work on the larger Spartan 6 FPGAs (if you can manage to build a working memory controller...).

It seems that the Panos are only available on eBay US, any idea where to get them in Europe without all the shipping and tax hassle?

That's one way to get a computer where you're actually sure what runs on it.

You have to trust the FPGA, its toolchain, and any peripheral hardware and firmware with privileged access.

That's true in a technical way, but it is false in a practical one. An effective attack using compromised FPGAs or toolchains would be super hard to carry out undetected because of the degree of scrutiny the output would receive. Besides, the attack itself would have to make substantial assumptions about the way the FPGA would be wired into the resulting circuit. I'm not saying it can't be done, but it would be extremely hard to carry out, and I'm not aware of any such attacks ever being discovered in the wild.

> any peripheral hardware and firmware with privileged access.

That would be a more feasible vector. But it all still will be much more secure than your average computer with a BMC.

"An effective attack using compromised FPGAs or toolchains would be super hard to carry out undetected because of the degree of scrutiny the output would receive"

FOSS vulnerabilities countered the many-eyeballs argument a long time ago. There are even fewer people who know how to review hardware for flaws. I'm assuming it would be a targeted attack by default; that raises the bar. However, they could also leave a trigger that looks like a hardware flaw in the I/O interface. Intel has basically been doing that subversion with their ME flaws for some time now. Then, the only targeted part is just how to aim what's already there.

"That would be a more feasible vector. But it all still will be much more secure than your average computer with a BMC. "

True. Especially if you use an architecture like crash-safe.org or Cambridge's CHERI. I already advocate secure CPUs on FPGAs with dumb-as-allowed hardware if one can't get actual silicon. It also lets you throw in extra reliability features.

Cool project. I like the blogposts and manual that explain the inner workings.

Great article, I hope it helps motivate more people to give hardware design a try.

the truth is that most of the HW designers I know are editing inside their Vendor-provided IDEs.

Maybe true for FPGA designers, but not for ASIC designers in my experience.

Another crazy difference I experienced was that builds are NOT deterministic

Yes, hardware generation (synthesis, but mostly optimization, placement, and routing) is not deterministic. SW people are starting to experience that phenomenon with ML as well: you don't fully control what you get, but it works.

With HW generation, given the same HDL, tools, and seed, you should at the very least be able to get reproducible builds.

In theory, you should. In practice, don't count on it.

This article is the perfect summary of everything that's wrong with the existing HW development toolchains for FPGAs.

The best bit is the Windows-only version of the Xilinx gooware that in fact installs a Linux VirtualBox VM on Windows to finally get to run the tools it needs.

Oh, and yeah: there's a lame protection in there that checks it's running on a specific VirtualBox VM with a specific MAC address.

Amazing (not in a good way).

> Alas, I am told by my friends that DDR controllers are no joke; they are not the playground of bored SW engineers.

No, they definitely aren't funny. I've worked with FPGAs and DDR controllers at college and they can be a big PITA. Even with DDR controller libraries you can still run into all sorts of timing issues.

When a SWE really starts to understand how complex a modern DDR controller is at the silicon level, two things happen:

1. You start to wonder at how very, very far your code runs from the theoretical capabilities of the hardware, and you start experiencing existential doubts.

2. You are overcome by a deep and immense feeling of gratitude towards whoever managed to force all that complexity to remain hidden underneath the simple memory abstraction you rely on to write day-to-day code.

I try really hard not to go down that particular rabbit hole, as it always ends with me 100% sure that software can't possibly work reliably, or even at all. The degree to which we assume that our hardware is able to move stuff from point 'a' to point 'b' a couple of billion times per second is scary.

Although I don't know hardware, I read lots of stuff about developing it to get a better idea. Among the more interesting reads were slides about each process shrink and the challenges it brought, especially from the beginning of deep sub-micron toward 28nm.

The impression I got, especially by 28nm, is that the hardware is inherently broken in quite a few ways. They have to correct the masks with algorithms, they do image recognition on circuits to spot patterns that act up, extra latches, variance across the chip/wafer, aging effects... the list goes on. It's a miracle they work at all.

These things are also why I only trust old nodes for security. Sort of.

Any links to reading materials for learning about that?

Found this PDF from Samsung regarding the DDR4 interface itself:


(The JEDEC DDR4 spec is $284 to download...)

Start reading data sheets of memory chips. They have state diagrams. DDR2 is great technology to start with. Some random memory chip: https://www.micron.com/-/media/client/global/documents/produ... on page 9 you have state diagram. There is simulation model available.

Where is this simulation model, and (here's hoping) is it cross-platform?

Also, is it anything like http://www.visual6502.org/JSSim/?

On Micron’s website for most of their parts. It worked very well with Altera's memory controller some time ago. I needed to evaluate read/write throughput for specific read/write patterns before designing printed circuit boards. I don’t understand what you mean by cross-platform.

There's actually an open source DDR4 controller[1]. To my untrained eyes it looks quite complex. It's written in Python using Migen[2] to generate the HDL.

[1] https://github.com/enjoy-digital/litedram

[2] https://github.com/m-labs/migen

It's very easy to make a DDR interface in an FPGA; it's difficult to make it run at the rated speed...

(But this observation applies to all FPGA stuff, really: it seems easy at first, it's just another language, etc., but you need to learn timing analysis, constraints, and optimization to use it for real.)

The Spartan-6 has a DDR memory controller built in.

Edit: I was wrong and confused the Spartan 6 of the Pano G2 with the Spartan 3 of the G1.

Spartan-6 does not.

But Xilinx has a MIG (memory interface generator) that automatically creates the RTL for a memory controller that synthesizes to regular core logic.

In addition, it also has IO cells with posedge and negedge FFs and calibrated delay lines.

Getting the DDR DRAM to work on the Pano G2 shouldn’t be too hard. (Less hard than the G1, which has DRAM that isn’t supported by the MIG.)

Well UG388 claims that it does have a hard memory controller block.

You're right, they do; just not on the lowest speed grade devices [1]

[1] https://www.xilinx.com/support/documentation/data_sheets/ds1...

I stand corrected!

I really should get the DRAM working on the G2. How hard could it be?

Famous last words. But I do hope you manage to get access to that sweet memory, it would make software life so much easier.

A couple of weeks of very hard work at a minimum :) Maybe a year or so on the outside. But please do!

How about 9 hours? :-)

Just got it up and running with a 125MHz clock and the Xilinx MIG (memory interface generator.)

Did you connect the Xilinx MIG memory controller to the AHB interconnect of the Leon?

I used a similar DDR2 memory (MT47H64M16, I think) on a Cyclone III a few years ago. Using the ALTMEMPHY from Altera, it was not too difficult to make it work. Then later we got a new batch of cards which would randomly fail. The problem was that the DDR had been switched from Micron to an equivalent from Samsung, and the timing must have been just different enough to cause random failures. So yes, DDR2 memories can be tricky.

Many years ago at University I used Leon and grlib for a project. It was a nice processor and IP library. I would like to use it again.

Hah! Awesome.

This article brings up an interesting question...

What other cheap hardware products contain FPGAs that are potentially user accessible?

I started a new message chain for this:

Ask HN: What other cheap hardware products contain FPGA's?


On a side-note of pedantry, we really need to stop using the term ‘compiling’ in the context of FPGAs and HDLs. To ‘compile’ is to assemble a dossier of documents and/or fill in forms - this is why Grace Hopper called her automatic code generation contraption a ‘compiler’: because, quite appropriately, it took the description of actions to be undertaken by the machine and fleshed them out in a ritualistic fashion in lower-level instructions.

HDLs and FPGAs have very different principles and objectives. The best term is ‘instantiate’, because one creates an instance of a given hardware description upon the substrate of gates provided by the array.

I’m sure I’ll be told I’m nit-picking, but those who do so would probably recoil in horror at the faux pas of some n00b saying a browser “compiles HTML” and tell them the correct term is ‘render’, and they’d be right.

Please, let’s be careful and deliberate about the terms we use, can we please?

I think you're not quite incorrect but your nitpicking isn't so much clarifying as redefining terms.

Lowering HDLs into logic that can be flashed onto LUTs does involve an intermediate compilation step, even though it also involves placing/relocating/routing elements on the FPGA, ultimately producing a file to be flashed to the chip.

One could argue that 'compiling' is often conflated with 'linking' in producing application binaries, and in fact 'linking' also involves positioning/relocating elements in some fashion.

'compiling' is suitable and conceptually compatible enough that it clarifies rather than confuses.

It’s precisely this notion of ‘lowering’, as you call it, that preoccupies me. If unchecked, the idea that all forms of ‘lowering’ through abstraction layers are a form of ‘compilation’ (of sorts) will become the norm, and we’ll no longer have a unique term for what compilation proper actually means. And you’re entirely right in noting that nowadays ‘compiling’ has come to encompass and conflate the conceptually very distinct steps of compilation, linking, and assembly... a process I’m more comfortable with collectively referring to as ‘building’.

It’s entirely possible that I might’ve (accidentally) redefined one or more meanings, and if I have, I apologise. It’s almost four in the morning and insomnia prevents me from sleeping but doesn’t necessarily maintain me at full alertness (tomorrow/today will be hell).

The term is "synthesis." You have a synthesis tool which turns your high level (verilog or whatever) hardware description into a netlist of gates and other hardware primitives. That's followed by "implementation," where the netlist is fed through the FPGA layout planner and place-and-route tool to generate a bitstream.

Yes! ‘Synthesis’ is much better than either ‘compiling’ or ‘instantiating’. Thank-you.

Another design to target at one of these boxes could be Milkymist [1], uses the lm32 CPU and various peripheral cores.

[1] https://github.com/m-labs/milkymist

This is awesome!!! (Ultra)SPARC CPUs are a joy to code for in assembler, and modern T3, T4 and T5s are number-crunching monsters.

How about synthesizing the GPL-licensed OpenSPARC T2 now?


I'd love to have SmartOS backported on a FPGA-based, OpenSPARC T2, 19" 1U rack mountable server someday. Free hardware and software all the way.

> This is awesome!!! (Ultra)SPARC CPU's are a joy to code for in assembler,

Dr Jack Whitham disagrees:

> https://www.jwhitham.org/2016/02/risc-instruction-sets-i-hav...

Of course anyone is free to disagree for any reason, but after having written assembly for about 10 architectures (and several compilers), I definitely prefer SPARC to anything else. The current SPARC V9 ISA spec is about 1/10 the size of the arm64 ISA spec (I wrote compilers for both). The instruction encoding is so simple I can assemble in my head.

There are some spectacularly annoying things like the stack bias, but those are easy to hide in the assembler (and those are problems with the System V ABI, not SPARC, embedded ABIs ignore them).

Can you elaborate on “stack bias”? What are your thoughts about register windows? I am open mindedly curious as to why you prefer that assembly? Also, what SPARC hardware are you running?

On SPARC V9 the stack and frame pointers don't point to the top of the stack; they point 2047 bytes (0x7ff) away from it. SPARC has 13-bit signed immediates, and much of that range would be wasted if the stack and frame pointers pointed to the top of the stack. Having a stack bias allows more of the immediate range to be utilized, at the cost of the assembler (and runtimes) having to keep track of the stack bias.

The offset is chosen odd so as to trivially identify 64-bit code in things like register window overflow trap handlers[2].
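The arithmetic behind those numbers can be checked directly (plain shell; the values follow from the 13-bit immediate width and the 0x7ff bias described above):

```shell
# A 13-bit signed immediate spans [-2^12, 2^12 - 1]:
echo "immediate range: $(( -(1 << 12) )) .. $(( (1 << 12) - 1 ))"
# prints: immediate range: -4096 .. 4095

# The V9 stack bias is 2^11 - 1 = 2047 (0x7ff); being odd, the low bit
# of a biased %sp/%fp cheaply marks a 64-bit register window:
echo "stack bias: $(( (1 << 11) - 1 ))"
# prints: stack bias: 2047
```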

When I say I prefer some assembly vs. some other assembly, I speak from the perspective of a kernel/compiler writer. Realistically you will never use assembly to write general-purpose code. You either target it from a compiler (in which case it had better be easy to target), or write some specialized runtime/driver code, in which case it had better have easy-to-understand semantics and behavior. If you were to write general-purpose code in assembly, say write assembly instead of C, an orthogonal CISC architecture would be way easier to write code for. But nobody does that anymore. People only use assembly when they have to (or target it from a compiler), and for that particular case the features of the ISA that matter most are quite different from what a general-purpose programmer would want.

SPARC really excels for me because of how easy it is to target and how easy it is for me to understand when I am writing runtime code.

I have various SPARC hardware, the most interesting (and the one eventually used for the Go port) being a single-digit serial number S7-2 system [3].

[1] https://docs.oracle.com/cd/E18752_01/html/816-5138/advanced-...

[2] http://src.illumos.org/source/xref/illumos-gate/usr/src/uts/...

[3] https://www.oracle.com/servers/sparc/s7-2/

I grew up around cracking and the software piracy scene on the Commodore64 and Amiga so I write UltraSPARC assembler for fun (and profit, since I've always been able to utilize my knowledge and insights when landing job gigs). Does pouet.net tell you anything?

50% of servers in my basement data center are SPARC based.

> If you do a serious degree-level computer architecture class, it'll probably involve MIPS, mainly so that your professor can set some really tough exam questions about the pipeline.

Ha, I remember suffering through this. MIPS has a relatively pretty instruction set, but it’s just such a pain to write by hand, and the arbitrary choice of forcing delay slots and pipeline stalls on developers in the way that it did was just really disappointing…

The Leon CPU used in the article (and by ESA) is a derivative of OpenSPARC T2.

Edit: no it isn't, see below.

Leon is a 32-bit, SPARC V8 ISA implementation. It's also one of the rare radiation-hardened designs (for use in space equipment where cosmic radiation has a significant influence on correctness of operation).

An OpenSPARC T2 is a SPARC V9b+VIS ISA. All UltraSPARC processors with a SPARC V9 ISA are 64-bit.

Having written the SPARC64 compiler and runtime for Go (and the Solaris port), I assure you I know the difference between SPARC V8 and SPARC V9.

It appears that I have been misled: back in 2005 when OpenSPARC was released, somebody from Sun told me that LEON was based on OpenSPARC, and I haven't bothered to check since. It seems to be a completely different design (it even uses VHDL instead of Verilog). So I was very mistaken about LEON.

However, there have been other (embedded) SPARC V8 CPUs derived from OpenSPARC. I have worked on one myself (on the software side). This one was a very strange one because it only had two register windows (and the ABI didn't use register windows) and they disabled SMT.

This type of modification is extremely easy to do on SPARC because of the way 64-bit support works. There are no separate execution modes; there's simply another set of flags that's affected by instructions, and the conditional instructions choose which set of flags to use.

I really like SPARC, it's my favorite architecture.

The LEON3 CPU used in the article is a 32-bit SPARCv8, it isn't derived from the T2. You can see the 32-bit addresses in the Hello World example.

See my parallel reply.

Hmm, yeah, that does look affordable. Lots of ads on ebay.

"Lot of 25 Pano Logic Thin / Zero Desktop Client Black w/ Power Supply

Buy now: US $170.00"

I wonder what the thinking was that led Pano Logic to put expensive FPGAs inside these units, instead of some more typical cheap ARM SoC?

Edit: Ah, they operated in 2006-2012. I guess that was just before the rise of the very cheap/fast SoCs.

Exactly. These are far, far from cheap FPGAs; Spartan 6 chips are just old, not cheap. When I think of cheap FPGAs, I imagine those little Lattice parts for interface bridging.

Cheap ARM SoCs were on ARM’s roadmap back then. The Cortex A9 was a huge innovation in those days.

I guess they were thinking they'd use very expensive FPGAs in the first VC-funded iterations, and then move on to something cheaper. But they never got there...


"... the company had doubled or tripled its revenue every year since 2008, when it had about $1 million in sales.


The company was backed by about $38 million in funding from investors including Goldman Sachs, ComVentures, Foundation Capital, Fuse Capital, and Mayfield Fund."

I think you are correct.

FPGAs are often used as a stepping stone towards cheaper ASIC or hybrid solutions. An ASIC requires high volume and a working design, neither of which you have early in a project.

>I wonder what the thinking was that led Pano Logic to put expensive FPGAs

Cloud computing got really hot in 2006; it was the perfect time to milk suckers. https://www.crunchbase.com/organization/pano-logic The MO was finding a bigger idiot before running out of coal.

Reminds me of another 'we will make it up in volume' dotcom scam, the I-Opener (https://en.wikipedia.org/wiki/I-Opener): selling ~$500 worth of computer and LCD monitor at $99.

Reminds me of the Nvidia G-Sync module. Those used FPGAs which cost about $250 at 1ku quantities, iirc, plus 768 MB (1st gen) / 3 GB (2nd gen) of memory. Even at the hefty pricing of G-Sync screens, it probably only covered the BOM cost.

(The large amount of memory is not because they need that much memory, but because they needed the bandwidth of a 3x 32-bit bus.)

Well, for comparison, the cost of a current 100K LUT FPGA on a board is:

$250 for Xilinx Artix-7:


$100 for Lattice ECP5 (85K LUT):


Also, Lattice only does LUT4 (their internal look-up tables have 4 inputs), while the Artix-7 has LUT6. From what I heard, that means you need more than 1.6 times as many LUT4s as LUT6s, not counting the fact that the Artix-7 includes other hardware features which are sometimes wired into a design.
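A back-of-the-envelope sketch of that rule of thumb (the 1.6x factor is the hearsay estimate above, not a vendor figure, and 100000 is just an illustrative device size):

```shell
# Convert a LUT6 count into an approximate LUT4 equivalent
# using the ~1.6x rule of thumb:
awk 'BEGIN { lut6 = 100000; printf "%d LUT6 ~ %d LUT4\n", lut6, lut6 * 1.6 }'
# prints: 100000 LUT6 ~ 160000 LUT4
```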

Note that you can't easily compare LUT counts between vendors, or even among a single vendor's different architectures.

In any case, I think the ECP5 is partially supported by open source tools, which should make it much more interesting to hobbyists.

I haven't played with mine yet, but there's an open toolchain for the ECP5 :)

This use of expensive surplus hw reminds me of the nsa@home project


In one of the bash scripts in this article there's a lot of "|| exit 1". You can enable this behavior automatically by putting "set -e" at the beginning of the script; this works in POSIX shell too. You can get extra safety in bash specifically with something like "set -euo pipefail", which exits on errors, including pipeline failures, and also on undefined variables.
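A minimal demonstration of the difference, runnable in any POSIX shell (the survived/aborted strings are only illustrative):

```shell
# Without set -e, execution continues past a failing command:
plain=$(sh -c 'false; echo survived')
echo "plain:  $plain"     # prints: plain:  survived

# With set -e, the shell exits at the first failure, so the echo never
# runs and the subshell returns a nonzero status:
strict=$(sh -c 'set -e; false; echo survived' || echo aborted)
echo "strict: $strict"    # prints: strict: aborted
```

(`set -e` is POSIX; `pipefail` historically was not, which is where portability arguments tend to start.)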

Good idea -- here is an explanation: http://redsymbol.net/articles/unofficial-bash-strict-mode/

This will make the program instantly unportable, because not all Bourne shells are POSIX-compliant. Please don't do that. || exit 1 is portable.

Where are we going to come across a Bourne shell that doesn't like `set -e`? I accept that this choice might have been dubious 20 years ago, but we don't need to support SunOS 4.1.4 or SCO anymore...

We're talking about standards that were published in 1992-1994 and that the overwhelming mass of the industry followed relatively quickly. To hold back 25 years later is nuts.

I see this crop up as "wisdom" for the sake of some quality, as if that quality were an absolute. What the parent is literally advocating is that we never fix the mistakes of the past.

If you are generating a script that has to run everywhere, then yes, have the backend emit the verbose or tricky or unergonomic construct. But most of the time, breaking portability to get something that is easier to understand, or more regular, or that uses modern facilities is almost always the correct solution.

So, I'm usually the curmudgeon who advocates for portability, slowness of change, etc. Only in the past couple of years did I move from C99 to really employing C11/C18 features in most code. This, in particular, though, I think has been safe for a couple of decades.

What's "everywhere", anyways? We'll have a hard time even finding a system this won't work on.

Especially in the context of FPGA tools that will only run on modern Linux, anyways...

> Especially in the context of FPGA tools that will only run on modern Linux, anyways...

Right, annatar doesn't actually care that his "advice" here is completely wrong and irrelevant. His only goal is to derail any useful discussion in order to point out how smart he is and how terrible any software that isn't included with solaris 8 is.

Let's be careful not to blame a specific person, but to recognize the general argument. There is a limitless supply of folks caught in the same loop.

It really looks like this guy is just a troll at this point.

I have sympathies for the arguments. It's kinda unfortunate that Linux has taken over everything and we don't have real portable code anymore and other Unixes find themselves on the margins. x86, even x86-64 isn't really "clean". Way too much of the software ecosystem is willing to be on the bleeding edge. I worry we're losing core bits of Unix philosophy / simplicity as things evolve. Etc.

If the arguments were completely untrue, they'd not find any purchase. But this guy is running around kicking the hornet's nest whenever possible.

I am kicking the hornets' nest because IT is in a really bad shape because of the GNU/Linux monoculture and I want for things not only to change, but to effect change. GNU/Linux got to where it is by users bitching about how great it was for two decades, so the theory goes it should work for anything else, especially where that something else is superior.

And also, I have no fear of hornets or their sting; I've been stung enough times. I have never cowered and I'm not about to start now.

To add insult to injury, I am forced to work deep in the bowels of Linux and various GNU tools like GCC because of the current market conditions, so I get to "live the dream" every day. I want things to change, and I'm doing something about it, the best way I know how, which is by raising awareness, just like all the Linux people did back in the '90's and early 2000's.

You're convincing people who would be otherwise sympathetic that your arguments have no merit, by spouting wrongness ("set -e" is a bashism!! Even though it appears in V7 unix and 3BSD!!) You can't even admit your mistake.

I've loved Solaris, NetBSD, and Irix. ZFS is pretty cool.

But it'd be pretty easy based on your posts to conclude that the only people who still advocate for that stuff are wrongheaded, pedantic curmudgeons.

I never wrote that set -e is a bashism. I implied that people who use set -e will also write bashisms. Goodness knows I've had to fix enough such garbage over the years to make it work with Solaris' /bin/sh, and I'm revolted, sick and tired of it.

In the meanwhile, notice how absolutely no one could explain what's wrong with || exit 1, other than that it's "ugly" (which is a personal opinion, not a fact)?

"You can't even admit your mistake."

So let me get this straight: because I was wrong about set -e making one's program instantly unportable, my entire argument that SmartOS is a good, fast product is invalid? I guess the implication here is that because I was wrong about set -e making a shell program unportable, I am wrong about everything else, is that it?

I haven't been talking about SmartOS. This thread isn't about SmartOS-- it's actually about FPGA synthesis tools that won't run on SmartOS, and criticism of how someone got those tools to run. In this thread you are batting .000-- the whole thing you showed up to assert was wrong, and the whole sideshow of a conversation you tried to start is bad.

set -e is a stylistic choice, though one I'd urge anyone writing scripts in Bourne and Bourne-like shells to prefer by default.

And I stand by my statement not to use set -e, because || exit 1 is a safe bet to be portable. This set -e is just nonsense.

The sad thing is I generally support things like SmartOS. I have a SmartOS box at home. It's just that anytime SmartOS or anything like docker comes up on HN, annatar comes out of the woodwork to complain how shitty linux is without providing anything of value.

A while back I even tried to encourage him to explain HOW zones are better with real examples instead of just shitting all over a how-to mentioning docker. He instead attacked me and said it wasn't his job to teach anyone anything. Awesome job, the only thing anyone learned from that thread was that SmartOS users are assholes.

What's left of solaris is dying and he is helping in every way he can.

"He instead attacked me and said it wasn't his job to teach anyone anything."

Sad to read this after having taught so many information technology professionals over the past three decades, some of them in the heart of Silicon Valley. What I probably told you is that I don't owe you anything, which is still true. You disrespected me and attacked me several times. Why should I teach you?

As for the "what's left of Solaris is dying" "argument", I would be careful making such statements; they reek of casual usage. SmartOS isn't Linux: one deploys it because one has understood its advanced capabilities and the features Linux doesn't have, to do a job and do it more reliably and easily than with Linux, not because it's a good old buddies club where we comb our Ken and Barbie dolls and pretend to drink tea at 4 o'clock. If you want an echo chamber, stick with Linux; if you need to do a job, read the SmartOS manual pages, then come back with concrete questions. I'm not about fanboyism and casual usage, "community" and all that "Stack Overflow" nonsense.

Now, with all of that out of the way: what do you need help with in SmartOS?

It is dying, though. No one wants to be beholden to Oracle.

Outside Oracle, the various efforts to use OpenSolaris derived stuff are constantly fragmented and overall shrinking in size. What little effort there is to improve things is often duplicative between the three+ branches of effort. Hardware support is more marginal than ever on OpenSolaris derived stuff.

Less and less important software compiles and works well under it, and it's getting harder and harder to run.

"What little effort there is to improve things is often duplicative between the three+ branches of effort."

That's funny, because I'm on the mailing lists for illumos and SmartOS, and I see that everything generally useful implemented in SmartOS ends up upstream in illumos. Do you have any evidence to support your claim?

It's true that less and less software compiles on Solaris 10. I have to spend time patching badly written software to get it to compile. I have, however, not noticed any performance impact; on the contrary, that same software runs faster on Solaris 10 (and by extension illumos, and therefore SmartOS) than on GNU/Linux, where it was developed, which is a slap in the face of the GNU/Linux crowd hacking on that trash fire.

SmartOS is a completely different story: they use pkgsrc, and their library is 15,000+ packages. So the argument here is misinformation. Presumably this is being done on purpose, "because Linux"?

Take a moment to view his comment history.

You may need 'showdead' on.

The discussion was just fine until someone suggested set -e claiming that || exit 1 is "harder to understand" and "ugly".

It was just fine until you imagined `set -e` as something unportable and a bashism, bro.

set -e is not a bashism, but people who reach for it are extremely likely to write bash-only code. That's just been my experience.

And I'm not your "bro".

Cute backtracking. :) It's fun to watch you squirm and try and come up with a narrative where the laughable things you've said are justified.

I've learnt a lot from you. The original Bourne shell didn't support set -e. set -e is a bashism. Depending on a posix shell is unreasonable. This has something to do with Linux.

Wait, none of these things are true.

> because on Solaris /bin/sh is the real McCoy - Steven Bourne's shell from 1972 - 1977.

vs. V7 Unix from 1977-1978, which included set -e. You're hilarious, bro.

For all I know, you could have made all that shit up. I see no reason to believe you or trust what you wrote there. I know I got busted by set -e or else I wouldn't have brought it up. It's that simple and no amount of trolling me will change that. I don't have Alzheimer's just yet.

And I'm not your "bro". Are you in need of an older brother so badly?

It's readily verified. I even pasted the original code, bro. Or you can log in to a unixv7 system here, in your browser: https://unix50.org/ Takes 30 seconds.

  Disabling XQ
  : hp(0,0)unix
  mem = 2020544
  WED DEC 31 19:15:05 EST 1969
  login: root
  You have mail.
  # set -e
  # true
  # false

I'm not your "bro", as much as you'd like me to be.

How is || exit 1 harder to understand?

With the exception of GNU autotools, I know of no backends which would emit shell programs. People write programs in shells.

I used that as an example of when using the portable idiom makes the most sense and not something specific wrt bash.

But in this case I know of

Autotools is written in M4. M3/M4 were developed by K&R to help automate generating (mostly) shell programs, in 1974-1977.

For someone who purports to be so oldschool, you're sure missing a lot.

Please take a moment to pause and consider what you're writing. Someone here wrote "let the backend generate correct code". I responded that people usually write shell programs, not backends.

You wrote that you know of no backends--- but we have a long history of macro preprocessing/expansion of shell scripts.

M4 would excel at this adaptation...

Which isn't even necessary, because everything POSIX-conformant has "set -e". And if POSIX isn't good enough, the Bourne shell back to 1978 has it (probably earlier).

You are hilarious, bro. :D

M4 is a generic preprocessor, not a backend shell program compiler.

And I'm not your bro.

Imagine a troll that acts like someone living in 1992 who only ever posts about how linux is a toy and solaris is a "real unix". That's annatar.

Which part of the fact that Linux is a toy bothers you, and why does it bother you? Is Linux a religion for you?

Presumably, the part where you push your opinion as fact to other people and do so in such a way that people are willing to label you as an outright troll.

You mean the same thing as the Linux / Docker / bash echo chamber on this site? Hm. Seems the good folk don't like a dose of their own medicine.

Which operating systems currently ship Bourne shells that aren't POSIX compliant?

I know this was a thing 20-25 years ago, but is it still a problem now?

So I guess I'm just used to POSIX-compatible shells; it would be good to know about Bourne shells that aren't POSIX compatible (genuine question). But the shebang at the top of the script is bash, so what I said is still correct in the context of the article. Stuff like multiple "|| exit 1"s makes people think that shell scripting is ugly and antiquated.

I know that "#!/bin/sh" and "set -e" will work on all the BSDs, Solaris, Linux, macOS, what Unix is realistically left?

Hi Rory. I am well aware of the flags you described - and indeed, they'd clean things up here. I am in fact tempted to go back and update the post - and either replace the shebang, or add the "early abort" flags.

But TBH, keeping that script (that I wrote in 60 seconds) nice and clean was the least of my worries... Making the damn toolchain work had far, far higher priority :-)

And after that, booting the LEON cores of course.

Yeah, of course. I didn't mean to take anything away from the awesome work you are doing; this was nitpicking, really. The script as it is will work as intended; I was really just highlighting this for the sake of others who may not be aware. Keep up the good work!

It doesn't matter what people think, what matters is that the code is portable. Isn't it better that the program JustWork(SM) unmodified on all platforms without having to worry about whether the shell is POSIX-compliant?

Up until the latest Solaris 10 patches, /bin/sh and /sbin/sh did not implement set -e, because on Solaris /bin/sh is the real McCoy: Stephen Bourne's shell from 1972 - 1977. There is a fully POSIX-compliant /usr/xpg4/bin/sh, but only Solaris experts know that the /usr/xpg4 directory exists and what is in it; 99.99% of the people out there won't have it before /bin or /usr/bin in their PATH.

Portability should never be traded for convenience, because when one does that, one forces "the next guy" to waste his time fixing one's code. That's just wrong. I've had so much of my life wasted by GNU/Linux "bashisms" which were completely unnecessary. I resent that deeply. That's my life I could have spent in more productive and fulfilling ways, rather than fixing what should not have been broken to begin with.
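For illustration, here is one such unnecessary bashism alongside its portable replacement (variable names are made up):

```shell
#!/bin/sh
# Bash-only pattern match (breaks under a strict Bourne /bin/sh):
#   if [[ $answer == y* ]]; then ...
# The portable equivalent, valid in every Bourne-family shell, uses case:
answer=yes
case "$answer" in
    y*) result="affirmative" ;;
    *)  result="negative" ;;
esac
echo "$result"
```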

If you want to write portable shell programs, write them in ksh and then you won't have to worry about POSIX or non-POSIX.

> Up until the latest Solaris 10 patches, /bin/sh and /sbin/sh did not implement set -e, because on Solaris /bin/sh is the real McCoy - Steven Bourne's shell from 1972 - 1977.

There's been a conformant POSIX shell on Solaris since 1995.

/bin/sh has been conformant to this since 2007 or so.

It is 2019. When should we be allowed to use it?

POSIX is the standard for Unix portability and has been for the last 2 decades. System V never was.

> Portability should never be traded for convenience, because when one does that, one forces "the next guy" to waste his time fixing one's code. That's just wrong. I've had so much of my life wasted by GNU/Linux "bashisms"

"set -e" is not a bashism. The errflag was there from nearly the beginning. By the early 1979 BSD 3 development tree, it could be triggered by 'set -e'. When the POSIX standards committee decided to iron out differences between the BSD and SystemV branches of the Unix Family tree, they decided it was worth keeping, and over the next few years everyone conformed.

> Portability should never be traded for convenience, because when one does that, one forces "the next guy" to waste his time fixing one's code. That's just wrong. I've had so much of my life wasted by GNU/Linux "bashisms" which were completely unnecessary. I resent that deeply. That's my life I could have spent in more productive and fulfilling ways, rather than fixing what should not have been broken to begin with.

There are also things that are good for everyone else's productivity; for example, not catering to someone who is stuck 18-40 years in the past.

> because on Solaris /bin/sh is the real McCoy - Steven Bourne's shell from 1972 - 1977

Wow-- I hadn't realized just how wrong your argument was! Bourne's shell from the outset supported set -e!

From Bourne's "An Introduction to the Unix Shell", ~1977-1978 (converted to HTML at http://porkmail.org/era/unix/shell.html )

> The shell flag -e causes the shell to terminate if any error is detected.

and the document goes on to state that you can use 'set' to set shell flags.
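That documented behavior is easy to reproduce in any modern shell; a minimal sketch (the path `/tmp/demo_errexit.sh` is just for illustration):

```shell
#!/bin/sh
# Write a tiny script that enables errexit and then fails mid-way.
cat > /tmp/demo_errexit.sh <<'EOF'
set -e
echo "before the failure"
false
echo "never reached"
EOF

# Under set -e the shell terminates at 'false'; the second echo never runs.
if out=$(sh /tmp/demo_errexit.sh); then rc=0; else rc=$?; fi
echo "$out"
echo "exit status: $rc"
```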

It's in the V7 source, too:

  CHAR    flagchar[] = {
          'x',    'n',    'v',    't',    's',    'i',    'e',    'r',    'k',    'u',    0
  };
  INT     flagval[]  = {
          execpr, noexec, readpr, oneflg, stdflg, intflg, errflg, rshflg, keyflg, setflg, 0
  };
Along with a proper call to options() in the set builtin.

If the basis of your whole argument is history, you're complaining about something that would have worked on V7 or 3BSD. :D Apparently they are "too new."

edit: Running V7 on a PDP-11 emulator has the anticipated results:

  : hp(0,0)unix
  mem = 2020544
  WED DEC 31 19:15:00 EST 1969
  login: root
  You have mail.
  # set -e
  # false

>Up until the latest Solaris 10 patches

You speak in absolutes.

Very dated absolutes as well.

You might want to think about that.

Solaris 10 release date: January 2005.

Turns out they're absolutely wrong absolutes, too, since Bourne's shell, as it shipped in V7, supported set -e.
