Simple Memory Model
The RISC-V address space is byte-addressed and little-endian. Even though most other ISAs such as x86 and ARM have several potentially complex addressing modes, RISC-V only uses base+offset addressing with a 12-bit immediate, to simplify load-store units.
Looks like RISC-V is taking a page from that strategy book. Was it Gabriel's "Worse Is Better" essay? https://www.dreamsongs.com/RiseOfWorseIsBetter.html
That its predecessor compiled on an EDSAC, that it compiled on a PDP-11, and that it was the language of UNIX (its killer app) is why it spread everywhere. Network/legacy effects took over from there. There were languages like Modula-2 that were safer, efficient, easy to implement, and still close to the metal. Such languages kept being ignored, and mostly still are to this day for systems programming. People even ignore the ones that are basically C with enhanced safety (e.g. Cyclone, Popcorn).
It's all social and economic reasons justifying C, at its creation, during its spread, and for its continuing use. The technical case against such a language has always been there and has been field-proven many times.
1. Languages designed to be high-level, readable, optionally easy to compile, work with large programs, have safety, and compile to efficient code.
2. Two languages, BCPL and C, that strip as much of that as possible to compile on an EDSAC and PDP-11 respectively.
Thompson and Ritchie, along with the masses, went with No 2, while a number of groups with minimal resources went with No 1 and got better outcomes. At each step of the way, groups in No 1 were producing better languages with more robust apps. Yet Thompson et al. continued to invest in improving No 2. By the present day, most fans of No 2 forget No 1 existed, push No 2 as if it were designed like No 1, and praise what No 2 brought us, when it actually hampered advances that No 1 enabled easily.
Compiler, HW, and OS people should've invested in a small language from the No 1 category, as Wirth's people did. That Pascal/P got ported to 70 architectures, some 8-bit, tells me the No 1 category wasn't too inefficient or difficult either. Just gotta avoid heavyweights like PL/1.
My first compiler was Modula-2 on my Atari ST. But it was difficult to do much in it because so much of the OS documentation and example code was geared towards C. Also, compiling on a floppy-based system (couldn't afford a hard disk) was terrible.
Funny thing is, the Oberon family was used to build both apps and whole operating systems. Whereas Thompson et al.'s version is merely an application language lambasted in comparisons with system languages. I don't know if they've built it up to Oberon's level thanks to tooling and such, or if they've dropped back down a notch from Component Pascal, since it's less versatile. Just can't decide.
Note: Imagine where we'd be if they figured that shit out early on like the others did. We'd be arguing why an ALGOL68 language wasn't good enough vs ML, Eiffel DbC, LISP's macros, and so on. The eventual compromise would've been better than C, Go, or ALGOL68. Maybe.
Want a drop-down menu? Make a file in the current directory with the name of the menu, and put the commands you want in the menu into it. Done. And that can go as deep as you like.
Those commands can include snippets of shell code that act on the current document, e.g. | sed 's/[a-z][A-Z]//g'
Highlight some text, middle click that command and the command runs on the selected text.
Add to that a file system for the text editor :
date > /n/acme/new
Execute that and you get a new document containing the date.
When using it, it feels limitless. You build a command set that matches your current project. There's a pattern matcher too, the plumber, so you can click somefile.c:74 and the editor opens that file at that line. So "grep -n someregex *.c" gets you a list of clickable "links" to those files.
When you browse the plan9 source code you might see the unusual
I'll stop there, and I've only scratched the surface.
Nonetheless, I'm hesitant to put an execution system in a text editor. One of my favorite aspects of text editors is that they load, edit, and/or render data. The data stays data. That means throwing one in a sandbox was always the safest route to inspect files I wasn't sure about. An Oberon-style text editor that links in other apps' functionality might be a nightmare to try to protect. It's why I rejected that style of GUI in the past.
Possibly a nightmare for someone else to reason about, so I accept that aspect. But every long-time plan9 user I know (and that's about 20 I know by name, to their face, plus more I meet at conferences) finds going back to plain old unix a retrograde step. It's like going back to a 16" b&w tv.
Has there ever been an efficient, portable implementation of Cyclone or Popcorn? Does anyone seriously consider either to be easy to implement?
And, if you just randomly picked three examples, why are they all bad? What does that mean about the other examples, statistically?
Three people wrote a consistent, safer platform in it, from OS to compiler to apps, in 2 years. Amateurs repeatedly did stuff like that with it and its successors like Oberon for another decade or two. How long did the first UNIX take to get written and made reliable in C?
"Has there ever been an efficient, portable implementation of Cyclone or Popcorn? "
I could've said that about C in its early years given it wasn't intended to be portable. Turns out that stripping almost every good feature out of a language until it's a few primitives makes it portable by default.
"And, if you just randomly picked three examples, why are they all bad? What does that mean about the other examples, statistically?"
It means that, statistically, the other examples will have most of C's efficiency with little to none of its weaknesses. People using them will get hacked or lose work less often. Like with all the safe systems languages that people turned down in favor of C. Least important, it means I picked the examples closest to C in features and efficiency because C developers' culture rejects anything too different. That includes better options I often reference.
Why is that bad for their ability to produce robust programs? And what does that mean about C developers statistically?
In particular, I assert that Unix was considerably bigger and harder to write than those OSes written with Modula-2, in the same way that Linux quickly became much bigger than Minix. That "bigger" and "harder" isn't the result of bad technique or bad languages - it's the result of making something that actually does enough that people care to use it.
Next assertion: C makes some parts easy that are hard in Modula-2. That makes it more likely that the harder parts actually get written, that is, that the OS becomes usable instead of just a toy. (True, C also makes it easier to write certain kinds of bugs. In practice, though, the real, usable stuff gets written in C, and not in Modula-2. Why? It's not just because everybody's too stupid to see through the existing trends and switch to a real language. It's because, after all the tradeoffs are weighed, C makes it easier to do real work, warts and all.)
As for why "the real, usable stuff" gets written in C? Because C is the only first class programming language in the operating system.
And of course 20 years ago "the real, usable stuff" was actually written in assembly because C was slow and inefficient. Or Fortran if you needed to do any math more complicated than middle school algebra.
The result, though, was that Unix became available for the cost of distribution plus porting, and it was portable. It was the easy path for a ready-for-real-work non-toy operating system on new hardware.
It's programming, science, all of it. You identify the key goals. A good chunk of it was known, as Burroughs kept implementing it in their OSes. Their sales numbers indicated people wanted to use those enough that they paid millions for them. ;) Once you have goals, you derive the language, tools, whatever to achieve those goals. Thompson failed to do this, unless he only cared about easy compilation and raw speed at the expense of everything else. Meanwhile, Burroughs, Wirth, Hansen, the Ada people, Eiffel later... all sorts of people did come up with decent, balanced solutions. Some, like the Oberons or Component Pascal, were very efficient, easy to compile, easy to read, stopped many problems, and allowed low-level stuff where needed. That came straight from the strengths of the designs they imitated. A form of that would have been easy to pull off on a PDP, as Hansen showed in an extreme way.
C's problems, which contributed to the UNIX-Haters Handbook and many data losses, came straight from the lack of design in its predecessors, which solely existed to work on shit hardware. They tweaked that to work on other shit hardware. They wrote an OS in it. Hardware got better, but the language's key problems remained. Whether we use it or not, we don't have to pretend those effects were necessary or good design decisions. Compare ALGOL68 or Oberon to BCPL to see which looks more thought out if you're still doubting.
Are you referring to C here, or to B? If C, I'd like to see your source. If B, that's a highly misleading statement. You're attributing to C failings that belong to another language, failings which C was designed to fix.
> Now, what would UNIX look like if they subsetted and streamlined a language like ALGOL68 or Modula-2 that was actually designed to get real shit done and robustly?
But they weren't. I mean, yes, those languages were designed to get stuff done, and robustly, but in practice they were much worse at actually getting stuff done than C was, especially at the level of writing an OS. Sure, you can try to do that with Algol. It's like picking your nose with boxing gloves on, though. (The robust part I will give you.)
> Their sales numbers indicated people wanted to use those enough they paid millions for them. ;)
But, see, this perfect, wonderful thing that I can't afford is not better than this piece of crap that crashes once in a while, but that I can actually afford to buy. So what actually led to widely-used computers was C and Unix, and then assembly and CP/M, and assembly and DOS.
> Once you have goals, you derive the language, tools, whatever to achieve those goals. Thompson failed to do this unless he only cared about easy compilation and raw speed at the expense of everything else.
Nope. You don't understand his goals, though, because they aren't the same as your own. So you assume (rather arrogantly) that he was either stupid or incompetent. (The computing history that you're so fond of pointing to could help you here.)
My side of this discussion keeps saying C's design is bad because it avoided good attributes of (insert prior art here) and has no better design because its foundations effectively weren't designed. The counters, from two or three of you, have been that the specific instances of the prior art were unsuitable for the project due to specific flaws, some you mention and some not. I countered that error by pointing out C's prior art, BCPL to B to original C, had its own flaws. Rather than throw it out entirely, as you all are saying for C alternatives, they just fixed the flaws of its predecessors to turn them into what they needed. The same thing we're saying they should've done with the alternatives.
So, on one hand you talk like we had to use Modula-2 and the others as-is or it would be impossible to use them. Then, on the other, you justify that C's prior work had to be modified to become something usable. It's a double standard that's not justified. If they could modify and improve the BCPL family, they could've done it with the others. The results would've been better.
"The robust part I will give you."
As I've given you speed, ease of porting, and working best in HW constraints. At least we're both trying to be fair here. :)
"but that I can actually afford to buy. So what actually led to widely-used computers was C and Unix, and then assembly and CP/M, and assembly and DOS."
It did lead to PL/M, which CP/M was written in. And to the Ceres workstations that ETH Zurich used in production. And the A2 Oberon system, which I found quite useful and faster than Linux in recent tests despite almost no optimization. It took almost no labor compared to UNIX and its basic tools. I imagine data, memory, and interface checks in a micro-ALGOL would've done them well, too.
"Nope. You don't understand his goals, though, because they aren't the same as your own. "
That's possible. I think it's more likely they were very similar to my own as security and robustness were a later focus. I started wanting a reliable, fast, hacker-friendly OS and language for awesome programming results. Started noticing other languages and platforms with a tiny fraction of the investment killed UNIX/C in various metrics or capabilities with various tradeoffs. Started exploring while pursuing INFOSEC & high assurance systems independently of that. Eventually saw connection between how things were expressed and what results came from them. Found empirical evidence in papers and field backing some of that. Ideas you see here regularly started emerging and solidifying.
No, I think he's a very smart guy who made many solid achievements and contributions to IT via his MULTICS, UNIX, and Plan 9 work. UNIX has its own beauty in many ways. C a little, too. Especially, when I look at them as an adaptation to survive in specific constraints (eg PDP's) using specific tech (eg BCPL, MULTICS) he learned before. Thing is, my mental view of history doesn't begin or end at that moment. So, I can detach myself to see what foolish choices in specific areas were made by a smart guy without really thinking negative of him outside of that. And remember that we're focusing on those specific topics right now. Makes it appear I'm 100% anti-Thompson, anti-C, or anti-UNIX rather than against them in certain contexts or conditions while thinking better approaches were immediately apparent but ignored.
"The computing history that you're so fond of pointing to could help you here."
I've looked at it. A ton of systems were more secure or robust at the language level before INFOSEC was a big consideration. A number of creations like QNX and MINIX 3 achieved low-fault status fast, while UNIX took forever due to bad architecture. Oberon systems were more consistent, easier to understand, faster to compile, and eventually included a GC. NeXTSTEP and SGI taught it lessons for desktops and graphics. BeOS, like Concurrent Pascal before it, built into the OS a consistent, good way of handling concurrency to get great performance in that area. System/38 was more future-proof plus object-driven. VMS beat it for cross-language design, clustering, and the right functions in the OS (e.g. distributed locking). LISP machines were more hacker-friendly, with easy modification and inspection even of running software, with the same language from apps to OS. And so on.
The prior history gave them stuff to work with to do better. Hence, me accusing them. Most of the above are lessons learned over time building on aspects of prior history plus just being clever that show what would've happened if they made different decisions. If not before, at least after the techs showed superiority we should've seen more imitation than we did. Instead, almost outright rejection of all that with entrenched dedication to UNIX style, bad design elements, and C language. That's cultural, not technical, decision-making that led to all related problems.
No, my counter in the specific bit that you are replying to here is that your history is wrong. Specifically, you said that C was initially so bad that they couldn't even write Unix in it. That statement is historically false - except if you're calling BCPL and B as "part of C" in some sense, which, given your further comments, makes at least some sense, though I still think it's wrong.
I'm not familiar enough with Modula or Oberon to comment intelligently on them. My reference point is Pascal, which I have actually used professionally for low-level work. I'm presuming that Modula and Oberon and that "type" of languages are similar (perhaps somewhat like you lumping BCPL and C together). But I found it miserable to use such a language. It can protect you from making mistakes, but it gets in your way even when you're not making mistakes. I would guess that I could write the same code 50% to 100% faster in C than in Pascal. (Also, the short-circuit logical operators in C were vastly superior to anything Pascal had).
So that's anecdote rather than data, but it's the direction of my argument - that the "protect you from doing anything wrong" approach is mistaken as an overall direction. It doesn't need for later practitioners to re-discover it, it needs to die in a fire...
... until you're trying to build something secure, or safety-critical, and then, while painful to use, it still may be the right answer.
And I'm sure you could at least argue that writing an OS or a network-facing application is (at least now) a security situation.
My position, though, is that these "safety-first" languages make everything slower and more expensive to write. There are places where that's appropriate, but if they had been used, if C hadn't won, we would be, I estimate, ten years behind where we are today in terms of what software had already been written, and in terms of general availability of computers to the population. The price of that has been 40 years of crashes and fighting against security issues. But I can't say that it was the wrong choice.
Well, if you watched the Vimeo video, he looks at early references and compares C side-by-side with its ancestors. A lot of early C is about the same as BCPL and its squeezed version, B. The first paper acted like they created C's philosophy and design out of thin air based on B, with no mention of BCPL. Already linked to it in another comment. Fortunately for you, I found my original source for the failed C attempt at UNIX, which doesn't require a video and side-steps the BCPL/B issues:
You'll see in that description that the B -> Standard C transition took many intermediate forms. There were several versions of C before the final one. They were simultaneously writing UNIX in assembly, improving their BCPL variant, and trying to write UNIX in intermediate languages derived from it. They kept failing to do so. Ritchie specifically mentions an "embryonic" and "neonatal" C followed by this key statement:
"The language and compiler were strong enough to permit us to rewrite the Unix kernel for the PDP-11 in C during the summer of that year. (Thompson had made a brief attempt to produce a system coded in an early version of C—before structures—in 1972, but gave up the effort.)" (Ritchie)
So, it's a historical fact that there were several versions of C, that Thompson failed to rewrite UNIX in at least one, and that adding structs let them complete the rewrite. That's ignoring BCPL and B entirely. That they just produced a complete C magically from BCPL or B and then wrote UNIX is part of C proponents' revisionist history. The reality is they iterated with numerous failures. Which is normal for science/engineering and not one of my gripes with C. Just got to keep them honest. ;)
" I would guess that I could write the same code 50% to 100% faster in C than in Pascal. (Also, the short-circuit logical operators in C were vastly superior to anything Pascal had)."
Hmm. You may have hit sore spots in the language with your projects, or maybe it was just Pascal. Ada would've been worse. ;) Languages like Modula-3, Component Pascal, and recently Go [but not Ada] are usually faster to code in than C. The reasons that keep turning up are straightforward: they're designed to compile fast to maximize flow; default type-safety reduces hard-to-debug problems in modules; there are often fewer interface-level problems across modules or during integration of 3rd-party libraries. This is why what little empirical work I've read comparing C, C++, and Ada kept showing C behind in productivity and with 2x the defects. As far as low-level goes, the common trick was wrapping the unsafe stuff in a module behind safe, simple interfaces. Then use it as usual, but be careful.
". until you're trying to build something secure, or safety-critical, and then, while painful to use, it still may be the right answer."
Not really. Legacy software is the counterpoint: much stuff people build sticks around to become a maintenance problem. These languages are easier to maintain due to type protections countering common issues in maintenance mode. Ada is strongest there. The simpler ones are between Ada and C in catching issues but allow rapid prototyping due to less debugging and fast compiles. So, reasons exist to use them outside safety-critical.
"My position, though, is that these "safety-first" languages make everything slower and more expensive to write. "
In mine, they're faster and less expensive to write but more expensive to run at same speed if that's possible at all. Different, anecdotal experiences I guess. ;)
I take it you didn't link to the OS because it was only ever a toy, correct? If it had been useful for anything, it would be worth linking to -- otherwise, you're better off if I know less about it.
Lilith wasn't popular because it didn't solve any problems that needed to be solved -- it was too slow to be useful as a microcomputer OS, and it wasn't designed for that anyway. Microcomputer OS's of the time were written in assembly, and assembly continued to be important until around the time of Windows 3.1. Not integrating well with assembly is an anti-feature; the fact that it would become unnecessary some 15 years later is irrelevant. C did the job that needed to be done; Modula-2 did the job that somebody thought was cool. Also, you have yet to give me a reason to believe that Lilith was somehow safer or better-designed than UNIX of the time, considering that security wasn't even really a thing in 1980.
That's not to mention it's [Pascal family] just poorly designed from a usability standpoint, with the language creators doing silly things like removing "for" loops from Oberon (a decision which both Clojure and Rust eventually admitted was bad). Lilith itself was "succeeded" by an OS that was written in an entirely different language and made no attempt to be backwards compatible (but portability is a red herring amirite?).
>I could've said that about C in its early years given it wasn't intended to be portable. Turns out that stripping almost every good feature out of a language until it's a few primitives makes it portable by default.
I guess it's not a big surprise then that C wasn't popular in its early years? The implementations of Cyclone and Popcorn were never even complete enough to write software on normal operating systems, much less to write UNIX.
>It means that, statistically, the other examples will have most of C's efficiency with little to none of its weaknesses
It means that, statistically, they will all have other huge, glaring weaknesses that you pretend don't exist...
>Least important, it means I picked the examples closest to C in features and efficiency because C developers' culture rejects anything too different. That includes better options I often reference.
If you have better options, why didn't you name them? Because they're equally bad if you spend even a few minutes thinking about it? (Lemme guess: Limbo, SML, Ada)
The answer to all that is that my response to you was less thorough and less referenced on purpose. That's due to the trolling style of your first comment, which this one matches, albeit with more information. A comment more about dismissal and challenge, with little informational content, didn't deserve a high-information response. My observation and recommendation are here though:
That is, there were specific languages in existence, from the PL family, the Pascal tradition, the ALGOLs of Burroughs, ALGOL68, and so on, that balanced many attributes. A number of those attributes, from safety checks to stronger typing to interface protections, had already been proven to prevent problems or provide benefits. Thompson would've known that, given that MULTICS was one of the systems providing evidence of it, even if it failed in other ways. He even re-introduced some problems that PL/I prevented. Let's focus on the fundamental failure, though.
So, if I were him, I'd note that the consensus of builders of high-reliability systems and CompSci researchers was that we need a language like ALGOL68 that balances various needs. I'd still need as much of that as possible on my crappy PDP. So, I'd subset a prior, safe language, then make it compile with tight integration to the hardware due to resource constraints. I might even have made similar tradeoffs to theirs, although not in a permanent way. If safety checks wouldn't work yet, I'd turn off as many as I needed to get it to run. As hardware improved, I could turn more on. I'd keep a well-written language definition and grammar, as others showed how to do. I might also borrow some syntax, macros, or easy parsing from LISP to make my job easier, as Kay is doing and numerous Scheme teams did at the time. Keep the language imperative and low-level, though.
Thompson had a different idea. There was a language that had no good qualities except raw performance on crap hardware. It was the opposite of its author's design intent, to top it off. Thompson decided on it. Any gripes you have with Modula-2, Cyclone, Popcorn, etc. apply at this point, because the language, especially his B variant, wasn't good enough for just about any job. As I'm advising for safe languages, it had to be modified to get stuff done. The first version of C was still crap, to the point that UNIX was mostly assembly. They added structs to get information hiding and organization, and then were able to write UNIX in C. Both C and UNIX were simple enough to be portable as a side effect. The rest is history.
Almost any gripe you've had against safer languages of the time I can apply to C's predecessors, other than compilation difficulty. That you will dismiss them on the grounds that they need modification for the job at hand, but don't do that for C, shows the bias and flaws of your arguments. I've called out everyone from McCarthy to Wirth to Thompson for stuff I thought were bad ideas. For-loop removal is a good example. I didn't settle against C until years of watching people's software burn over stuff that was impossible or improbable in better-designed, even lean, languages. Evidence came in, even from empirical studies on real programmers: C showed up behind in every metric except raw performance, its history tells us why, and it logically follows that they should've built on better, proven foundations. Those that did got more reliable, easier to understand, easier to maintain systems.
Of course, don't just take my word for it: Thompson eventually tried to make a better UNIX with Plan 9, but more importantly a better language. He, Pike, and Griesemer all agreed on every feature in their language. That language was Go: a clone of the ALGOL68 and Oberon style with modern changes and updates. It took them a long time to learn the lessons of the ALGOLs that came years before them.
There have to be attributes of the language and tooling that appeal to Russian programmers more than others for whatever reason. I wonder what those are, both aspects of Russian style and language that appeals.
After Assembly, IBM used mostly Modula-2 and PL/M to write their mainframe OSes, e.g. OS/400, migrating the code slowly to C++ afterwards.
Also OS research at DEC and Compaq was done in Modula-2+, which gave birth to Modula-3.
The strong types did get in the way of interfacing with a lot of OS level stuff that wanted to deal in void pointers tho.
I was doing this in other languages way after you were dealing with Modula-2 on even older hardware. So, I'm curious as to how you and others then handled that problem? What were the tactics?
C got popular because it allowed low-level code to be implemented in something higher-level than assembler, and moderately portably too (machines back then were a lot more dissimilar than today). It's hard to disassociate C from Unix, the former making the latter easy to port. A symbiotic relationship.
It's important to remember C was designed for the processors of the time, whereas processors of today (RISC-V included) are arguably primarily machines to run C. C has brought a lot of good, but also a lot of bad that we are still dealing with: unchecked integer overflows, buffer underflows and overflows, and more general memory corruption.
No ISA since the SPARC even tries to offer support for non-C semantics.
You mean like Burroughs was doing with Extended Algol in 1961 for its B5000 system, implemented with zero lines of Assembly, while offering all the features for memory safety that C's designers deemed too hard to implement?
The archeology of systems programming languages is great for demystifying the magic aura C seems to have gained in the last 20 years with the rise of FOSS.
And FullyFunctional's point was largely about portability. So, if Burroughs took that OS and tried to port it to, say, a PDP-11, how well do you think that would have worked?
PDP-11 was more powerful than Burroughs systems.
Sure. It makes your "zero lines of assembly" claim somewhat less impressive, though...
Intrinsics semantics can be validated by the compiler.
"In fact, all unsafe constructs are rejected by the NEWP compiler unless a block is specifically marked to allow those instructions. Such marking of blocks provide a multi-level protection mechanism."
"NEWP programs that contain unsafe constructs are initially non-executable. The security administrator of a system is able to "bless" such programs and make them executable, but normal users are not able to do this."
"NEWP has a number of facilities to enable large-scale software projects, such as the operating system, including named interfaces (functions and data), groups of interfaces, modules, and super-modules. Modules group data and functions together, allowing easy access to the data as global within the module. Interfaces allow a module to import and export functions and data. Super-modules allow modules to be grouped."
Sounds familiar? Rust like safety in 1961, but lets praise C instead.
This is my one big beef with the RISC-V ISA, after I went over it with a fine-toothed comb; otherwise it's brilliant to my untutored eyes. The ISA doc, plus a paper I read about the same time on how difficult it was to make ISAs super fast, helped explain why, e.g., the VAX ISA took so long to make faster and was probably doomed, while Intel got lucky? with x86.
Anyway, the large-integer math people, e.g. in crypto, are not happy about it, and the lack of support for anything other than explicitly checking for divide by zero means their code will be slow compared to older ISAs that support it. And the justification in the ISA doc is that the most popular languages nowadays don't check for this, which is not the sort of thing I want to see from an ISA that's billing itself as a big thing for the future. I very much got Worse Is Better/New Jersey vibes from this....
Although this discussion a couple of weeks ago implies it's getting some consideration: https://groups.google.com/a/groups.riscv.org/forum/#!searchi...
Re. large integers: the RISC-V 2.0 (non-priv) ISA is frozen solid so any change would have to be an extension. I've been gently pushing for the mythical vector extension to be at least friendly to large integers.
I've seen no evidence that C brought us anything good. Anything I can do with C I can do with a Modula-2-like language with better safety and compiler robustness. If a check is a problem, I can disable it but it's there by default. Nonetheless, I'll look into any benefits you think C brought us that alternatives wouldn't have done.
Extended Algol was available in 1961.
PL/I and PL/M variants are older than C.
Mesa was already being used at Xerox PARC when C was born.
There are quite a few other examples that anyone who bothers to read SIGPLAN papers and other archives can easily find.
> Extended Algol was available in 1961.
> Mesa was already being used at Xerox PARC when C was born.
It still doesn't invalidate the fact that it had more memory safety features than C, a decade earlier.
Regarding Mesa, that is like calling x86/x64 firmware an interpreter for Intel bytecode.
Most literature in those days used bytecode to refer to Assembly processed by CPU microcode.
Any systems programming language has them.
However there is a difference between having them as an addition to strong type features and them being the only way to program in the language.
For example, using arrays and strings didn't require pointer arithmetic.
In C, all unsafe features are in the face of the programmer. There is no way to avoid them.
In Mesa and other similar languages, those unsafe features are there, but programmers only need to use them in a few cases.
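As a concrete illustration (my own toy example, not from the thread): even the most idiomatic C string copy is bare pointer arithmetic with no bounds information anywhere, so the unsafety is the default, not an opt-in.

```c
#include <string.h>

/* Toy re-implementation of strcpy: unchecked pointer walking, no
 * destination length anywhere. Nothing in the language stops an
 * overrun if src is longer than dst. */
char *my_strcpy(char *dst, const char *src) {
    char *d = dst;
    while ((*d++ = *src++))  /* copy bytes until the terminating NUL */
        ;
    return dst;
}
```

In a Mesa-style language the equivalent operation goes through a bounds-carrying string type, and the raw-pointer version would be confined to an explicitly unsafe module.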
C was designed with the PDP-11 instruction set in mind. For many years now, the C machine model has no longer mapped to the hardware the way many think it does.
The end result: a lot of CS students learned C in school.
Other languages tended to be tied to a particular machine, cost money, or weren't really suitable (Fortran/p-system Pascal) for the stuff people wanted to do.
The main alternatives were ALGOL60, ALGOL68, and Pascal. They were each designed by people who knew what they were doing in balancing many goals, and they, especially the ALGOLs, achieved a good chunk of most of them. A subset and/or modification of one, with gradual progression for improved hardware, would've led to better results than C with the same amount of labor invested. On the low end, Pascal ended up being ported to systems from 8-bitters to mainframes. On the high end, Burroughs implemented their mainframe OS in an ALGOL, with hardware that enforced its safety for arrays, stacks, and function calls.
In the 80's, things like Modula-2, Concurrent Pascal, Oberon, and Ada showed up. I'll understand if they avoided Ada given constraints at the time but the others were safer than C and quite efficient. More importantly, their authors could've used your argument back then as most people were doing but decided to build on prior work with better design. They got better results out of it, too, with many doing a lot of reliable code with very little manpower. Hansen topped it off by implementing a barebones, Wirth-like language and system called Edison on the same PDP that C was born on.
"The comment you were responding to reads to me like someone providing historical context, not an expression of an ideal."
The historical context in a related comment was wrong, though. It almost always is with C, because people don't know its true history. Revisionism took over. The power that revisionism has in locking people into C is why I always counter it with actual evidence from the inventors of the concepts. Two quick questions to illustrate why I do that and test whether it's sensible:
1. When hearing "the programmer is in control," did you think that philosophy was invented by Thompson for C or it was stolen without early credit from BCPL along with its other foundations?
2. Did you know that those languages were not designed or validated as good languages at all? And that they were grudgingly accepted just to get CPL, an ALGOL60 variant, to compile on crappy hardware, whose problems no longer exist, by chopping off its most important attributes?
If you knew 1 and 2, would you still think C was a "well-designed" language for systems programming? Or a failed implementation of ALGOL60 whose designers were too limited by hardware? If No 1, we should emulate the heck out of C. If No 2, we should emulate an ALGOL60-like language that balances readability, efficiency, safety, and programming in the large. Btw, all the modern languages people think are productive and safer lean more toward ALGOL than C, albeit sometimes using C-like syntax for uptake. Might be corroboration of what I'm saying.
I'll back out of the PL debate; this isn't really the place and there's too much to be said on the topic than will fit in this margin.
That's generally true so long as the winner is open. C was. Now, it runs on everything.
> [...] or it was stolen without early credit from BCPL along with its other foundations?
Show me where it references BCPL. As I see it, Ritchie only references B as the inspiration for C. From the document, anyone who hadn't read the B paper (most people) would think Thompson and/or Ritchie mostly came up with C's features and strengths. These eventually became known as the "C philosophy" and C-like languages.
Whereas it was Martin Richards who invented both of those, plus the idea of stripped-down languages for writing tools on minimal hardware. He should get credit for creating the "BCPL philosophy" that "the programmer is in control," plus inspiring a BCPL-like language called C that enabled UNIX's success. Instead, early work focused all attention on Thompson and Ritchie, with the UNIX papers/software making sure that was the case by adding momentum.
At least I got to see the guy in action in the video linked on my History of C page. It shows his team, the constraints they worked under, why they dumped most of ALGOL, and him working with the language later on. That presentation put things into an honest perspective, where I can see his and the UNIX founders' contributions to the overall success, with due credits (and/or critiques) to each.
IBM: assembler & PL/S
GE/MIT (Multics): PL/I(-ish)
Control Data: Cybill (Pascal-ish) & assembler
Intel and Digital Research: PL/M
DEC: MACRO-11 & BLISS
DG: assembly & PL/I
Xerox: assembly, Smalltalk, Mesa/Cedar
Look at all the C applications, the C ABI that every other language exports for FFI, all the operating systems, big and small.
While I enjoy programming in others more, C is the single most versatile language. All the languages I like can call C a common ancestor.
Are there better options today? Maybe. Is there a question that C deserves a huge amount of credit? Not at all.
Also, on optimization, I don't know exactly what you mean by that. I assume you mean because it's got a lot of undefined behavior. There is so much optimization going on in tools like LLVM that I question its accuracy.
Anyway, I agree that there are more pleasurable and portable languages (and now just as performant/lightweight with Rust).
For example: Pointers, casts, unions, casting pointers to different types, and doing all of this across procedure call boundaries all challenge/break data flow analysis. You can pass a pointer to a long into a procedure, and the compiler has no idea what other compilation unit you are linking to satisfy the external call. You could well be casting that pointer-to-long to something totally crazy. Or not even looking at it.
I was associated with a compiler group experimenting whole-program optimization -- doing optimization after a linking phase. This has huge benefits because now you can chase dataflow across procedure calls, which enables more aggressive procedure inlining, which enables constant folding across procedure calls on a per-call-site basis. You might also enable loop-jamming across procedure calls, etc. C really is an unholy mess as far as trying to implement modern optimizations. The compiler simply can't view enough of the program at once in order to propagate analysis sufficiently.
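A hedged sketch of the problem (the names here are mine, purely illustrative): once a pointer escapes into an opaque call, the compiler must assume the callee may legally cast it and rewrite the underlying bytes, so no fact about the pointee survives the call boundary without whole-program analysis.

```c
#include <assert.h>

/* Imagine this definition lives in a separate compilation unit; the
 * compiler optimizing f() would see only the declaration and must
 * assume the worst. */
void opaque(long *p) {
    unsigned char *bytes = (unsigned char *)p;  /* legal type pun via char* */
    for (unsigned i = 0; i < sizeof(long); i++)
        bytes[i] = 0;                           /* scribbles over the long */
}

long f(void) {
    long x = 42;
    opaque(&x);   /* after this call, x cannot be assumed to still be 42 */
    return x;     /* so this return cannot be constant-folded */
}
```

With a whole-program (post-link) view, the compiler could inline `opaque`, see exactly which bytes change, and fold the result; compiling f() in isolation, it can conclude nothing.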
Now imagine Y was slightly worse than X, based on some magic objective measure. Even though it's the backbone of all these amazing things, Y has actually had a negative impact on the world. We could all have computers that are equally easy to program and use but slightly less buggy, if only that effort has focused on X instead.
So when you say there's no question that C brought a huge amount of value based on usage stats, you're wrong. It can definitely be questioned.
All the undefined behaviour in the C spec enables optimisations, yes, but it would be so much better if those optimisations didn't require a lot of mental labour by both compiler writers and programmers.
Now, := is another situation. Let me ask you: what did = mean before you learned programming? I'm guessing, as math teaches, it meant two things were equal. So making = the conditional check for equality is intuitive and reinforced by years of schooling. A colon in English does something similar: you assign one thing a more specific meaning. So := for assignment and = for equality makes sense, even if a better syntax exists for the := part.
What makes no sense is making the intuitive operator for equality (=) into assignment, with equality then becoming ==. We know it didn't make sense because Thompson did it when creating B from BCPL. His reason: personal preference. (sighs) At worst, if I were him, I'd have done == for assignment and = for equality to capitalize on intuition.
It makes much more sense if you consider it from a logic perspective. '=' is material implication (→), and '==' is material biconditional (↔) - which is logically equivalent to (a → b) ∧ (b → a)... so two equal signs (the logical conjunction is redundant so you drop it). Computer science favors logic, so I'd be pretty surprised if it was simply personal preference.
That isn't either assignment or equality testing.
"Isn't... equality testing"
Contradicted yourself there. The equals is used in conditionals with other logical operators to assert equality is true or not. That's a legit use for it.
You can't use that for equality testing at all. If the assumption is wrong, the compiler will generate bad code.
Note: This discussion on an "obvious" problem shows how deep the rabbit hole can go in our field in even the simplest thing. Kind of ridiculous and fun at same time haha.
In this case, though, it would be fine to use something smaller like braces, brackets, or parentheses. The reason is that intuition picks it up quickly, since the understanding is half there already: the braces visually wrap the statements. Not a big deal to me like equality was.
car: ld r2, [r3-1]
In the late 90s I would routinely get bus errors while running Netscape on SPARC Solaris machines, presumably due to corner-case unaligned memory accesses. x86 processors perform unaligned loads and stores, but at a slight speed penalty, and with the loss of atomicity.
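For what it's worth, here's the standard portable workaround (a sketch, assuming C99; the helper name is mine): route unaligned accesses through `memcpy` and let the compiler pick safe instructions, instead of dereferencing a misaligned pointer and bus-erroring on strict-alignment CPUs like SPARC.

```c
#include <stdint.h>
#include <string.h>

/* Dereferencing a misaligned uint32_t* is undefined behavior and faults
 * on strict-alignment hardware. memcpy is valid at any alignment, and
 * on CPUs that permit unaligned access it compiles to a single load. */
static uint32_t load_u32(const unsigned char *p) {
    uint32_t v;
    memcpy(&v, p, sizeof v);
    return v;
}
```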
Altera's tools are much nicer for the beginner to use than ones offered by Xilinx. This board also has a lot of peripherals you can interface with (ADC, accelerometer, etc.) to grow your skills.
- Memory leaks (grows from about 1GB to 11GB and starts swapping in a couple of minutes when editing existing IP)
- Single-threaded synthesis is slow (though isn't limited to Vivado specifically)
- Failing after 20 minutes of synthesis because of errors that would be easy to check at the start
- Placing debug cores can result in needing to synthesise, find something went wrong, delete debug core, re-synthesise, re-add debug nets, synthesise again...
- Aggressive caching results in it trying to find nets which were changed and no longer exist, despite re-creating the IP in question from scratch
- Vivado creates a ridiculous amount of temporary files and is a royal pain to use with version control (there is an entire document which details the methods to use if you want to create a project to be stored under version control)
I've been playing around with IceStorm for the iCE40 device and it's an absolute joy to use: fast, stable and simple. I appreciate that there are a lot of complex tools and reports which Vivado provides, but I would much rather use an open source tool like IceStorm for synthesis alongside the advanced tools from Vivado.
I'm genuinely interested. I've been really curious about FPGAs but I don't know what a good use case there is for them for a hobbyist.
Why would you use an FPGA? Mainly when you have specialized requirements that can't be met by processors. FPGAs mainly excel in parallelization. A designer can instantiate many copies of a circuit element like a processor or some dedicated hardware function and achieve higher throughput than with a normal processor. If your application doesn't require that, you might like them for the massive number of flexible I/O pins they offer.
Lastly, using FPGAs as a hobby is rewarding just like any other hobby. Contrary to popular belief, you don't "program" with a programming language like with a processor. You describe the circuit functionality in a hardware description language and a synthesizer figures out how to map it to digital logic components. You get a better insight into how digital hardware works and how digital chips are designed. When you use them as a hobby, you get the feeling that you are designing a custom chip for whatever project you are working on. Indeed, FPGAs are routinely used to prototype digital chips.
Here's just one simple example: controlling servos. Sure, you can do that with most uCs simply enough, say using timer interrupts, but what if I need to control 100 of them? In logic, I can just instantiate 100 very trivial pulse controllers, whereas this is typically impossible with a microcontroller, or at the very least leaves no cycles free for any computation.
Another example: you want to implement a nice little crypto primitive, like ChaCha20. Even though ChaCha20 is efficient, it's still a lot of cycles for a microprocessor, whereas an FPGA can implement it as a nice pipeline, potentially reaching speeds like 800 MB/s, while still having ample resources left for other work.
I could go on.
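To give a feel for the cycle count on a CPU: the ChaCha quarter-round below (a sketch following RFC 8439) is about a dozen dependent ALU operations, and a full block runs 20 rounds of four quarter-rounds each. A CPU grinds through that chain mostly serially, while an FPGA can lay the rounds out in space and pipeline blocks through them.

```c
#include <stdint.h>

#define ROTL32(x, n) (((x) << (n)) | ((x) >> (32 - (n))))

/* One ChaCha quarter-round (RFC 8439, section 2.1). Each statement
 * depends on the previous one, so a CPU executes them serially; an
 * FPGA can unroll and pipeline many of these in parallel. */
static void quarter_round(uint32_t *a, uint32_t *b, uint32_t *c, uint32_t *d) {
    *a += *b; *d ^= *a; *d = ROTL32(*d, 16);
    *c += *d; *b ^= *c; *b = ROTL32(*b, 12);
    *a += *b; *d ^= *a; *d = ROTL32(*d, 8);
    *c += *d; *b ^= *c; *b = ROTL32(*b, 7);
}
```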
"The Jazelle extension uses low-level binary translation, implemented as an extra stage between the fetch and decode stages in the processor instruction pipeline. Recognised bytecodes are converted into a string of one or more native ARM instructions."
You'll learn: Pipelines, Caches, Why cache misses are so painful and a whole host of CPU performance stuff that "looks" esoteric will become plain as day.
Honestly the FPGA in data center stuff is probably mostly hype for most people, but toying with an FPGA is super fun.
I haven't tried this particular one.
If so, then (naively) could one pack ~10 on that single FPGA? Or does the 'packing overhead' become a big problem? Or does the design use more (say) multiply units pro-rata, so that they become the limiting factor?
You can easily add some logic to let CPUs share pins, too.
That project is actually kind of what convinced me to go ahead and buy the CV. It was fun, but it just felt like a lot of work making things that I could just already have on the board.
http://www.clifford.at/yosys/ (Verilog synthesis)
https://github.com/cseed/arachne-pnr (Place and route)
http://www.clifford.at/icestorm/ (Bitstream generator and programmer)
/ # uname -n
/ # cat /proc/cpuinfo
CPU info: PUNT!
They actually produced some silicon recently, but unfortunately they don't have plans to sell any. It would be so much fun to play around with a fully open micro!
THIS WILL NOT STAND! WHO WOULD WANT TO USE THIS???
Note: The manual in the Wikipedia link describes the full ISA and features. It was actually a bad-ass CPU design. I'd really like to know what process node the remaining ones run at. Couldn't find it, though.
There seem to be too many CPU options. Multiply and divide as an option belongs down at the 50 cent 8-bit CPU that runs a microwave oven, not on something that runs Linux. Optional floating point is maybe OK, although you should be able to trap if it's not implemented and do it in software. You don't want to have zillions of builds with different option combinations. That drove MIPS users nuts.
The protection levels are still undefined; "User" mode is designed, but what happens at Supervisor, Hypervisor, and Machine mode?
If this were well-defined and cheap tablets were coming out of Shenzhen using it, it would be important.
What? No, it's an Instruction Set Architecture. It's the language the CPU speaks.
> Multiply and divide as an option belongs down at the 50 cent 8-bit CPU that runs a microwave oven, not on something that runs Linux
Irrelevant: nobody has to know your CPU doesn't implement multiply/divide, not even the OS; you trap to machine mode and life goes on. And not every RISC-V core needs to run Linux.
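For illustration, what such a trap handler boils down to is just a software multiply; something like this classic shift-and-add sketch (the routine name is mine, not from any RISC-V firmware):

```c
#include <stdint.h>

/* Shift-and-add multiply: roughly what an M-mode trap handler would
 * compute when a MUL instruction traps on a core without the hardware
 * multiplier. For each set bit of b, add the suitably shifted a. */
static uint32_t soft_mul(uint32_t a, uint32_t b) {
    uint32_t r = 0;
    while (b) {
        if (b & 1)
            r += a;     /* accumulate the shifted multiplicand */
        a <<= 1;
        b >>= 1;
    }
    return r;           /* low 32 bits of the product, like MUL */
}
```

Slow, sure, but transparent to everything above machine mode, which is the point.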
> but what happens at Supervisor, Hypervisor, and Machine mode?
You read the manual that exists, implement the privileged spec, and it'll run Linux. Just because it's not frozen doesn't mean that many RISC-V cores can't already boot a number of different operating systems.
> If this were well-defined and cheap tablets were coming out of Shenzhen using it, it would be important.
There are already shipping products with RISC-V cpus (like in cameras). Just because you're not the customer doesn't mean this work isn't very important to other people.
Yes you can. You just won't know about it. The open source CPU cores tend to go into things like CMOS cameras, ISPs, video processing chips, or FPGA projects.
But RISC-V is exciting because it's the first open-source ISA that really has a good chance of becoming an open 32-bit microcontroller or even a phone/tablet CPU.
There have been silicon chips taped out as university projects. No, you can't buy them, but it means there's a pretty low barrier for a commercial company to produce one. But I think the compilers just need to mature more, especially the LLVM one (which would be useful for making IDEs, etc.).
How does Apple's A9X compare to a Xeon E7?
RISC-V is excellent for hobbyist boards, mobile CPUs, high-efficiency low-performance servers, as a base for DSPs, high-efficiency network equipment, etc. Everything from the DSP in your phone to a Raspberry Pi to a MacBook Air.
POWER8 is excellent for high-performance computing, supercomputers, and workstations (Talos when?! :( ). Things like CAPI, NVLink, etc.: stuff that's well outside the scope of RISC-V (though I suppose you could make an ISA targeting HPC based on the RISC-V base integer ISA, but it would only tangentially resemble RISC-V by the time you're done).
Personally I'm looking forward to using both of them in the future!