OCLC’s research staff were instrumental in the development of the initiative, starting with a hallway conversation at the 2nd International World Wide Web Conference in late 1994. OCLC researchers Stuart Weibel and Eric Miller, OCLC Office of Research Director Terry Noreault, Joseph Hardin of the National Center for Supercomputing Applications (NCSA), and the late Yuri Rubinsky of SoftQuad were remarking on the difficulty of finding resources on the Web. Their discussion provided the impetus for the development of the Dublin Core Metadata Element Set, known simply as “Dublin Core,” which is now an international metadata standard.
True, but it does depend on your field. I was fortunate/unfortunate to have my feet in both over the years, and could not afford either early on, when I truly needed access.
I have an original Das Keyboard, and I was startled to see it in a very old college-era photo of my desk at the time, because it's still right there, working perfectly. Everything else has been replaced thrice over.
I've been a huge fan of CHERI for years, but unfortunately the hardware is super closed (huge research institutions only). Hopefully there will be a hardware option for hobbyists one of these years (the virtual machines don't interest me).
I wonder why emulators aren't interesting for a hobbyist. However, there are FPGA implementations [1], and micro-controller-type systems on FPGA available commercially [2].
[3] has a list of publications for the rigorous engineering agenda.
> I wonder why emulators aren't interesting for a hobbyist.
I have the Rust programming language to fill the software part of this niche. The hardware part of CHERI is what makes it interesting to me.
(e.g. I've tinkered with Rust bootloaders before, and it doesn't matter too much whether the emulator is CHERI or not since Rust itself lets me express memory safety in the type system.)
> CHERI doesn’t guarantee that your code is free from memory-safety errors, it guarantees that any memory-safety bugs will trap and not affect confidentiality or integrity of your program.
That sounds an awful lot like ensuring your code is free from memory-safety errors. A language which always traps on erroneous memory accesses is a memory safe language, so if CHERI really guarantees what that sentence says, then C on CHERI hardware is memory safe.
If it traps on everything that would otherwise be memory unsafety, then it is memory-safe. If trapping doesn't count as memory-safe, then e.g. Rust isn't memory-safe either, since it traps on out-of-bounds accesses to arrays.
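To make the Rust half of that concrete, here's a minimal sketch (my own, not from the thread; `oob_access_traps` is just a name I made up) showing that an out-of-bounds index in safe Rust panics instead of reading adjacent memory:

```rust
use std::hint::black_box;
use std::panic;

/// Attempts an out-of-bounds read through safe indexing and reports
/// whether the access trapped (panicked) instead of succeeding.
fn oob_access_traps() -> bool {
    let xs = [1, 2, 3];
    let i = black_box(5usize); // opaque index, so the check runs at runtime
    // Safe Rust inserts a bounds check on every index: going past the
    // end panics rather than reading whatever lies beyond the array.
    panic::catch_unwind(move || xs[i]).is_err()
}

fn main() {
    // Silence the default panic message so the demo output stays clean.
    panic::set_hook(Box::new(|_| {}));
    assert!(oob_access_traps());
    println!("the out-of-bounds access trapped; it never read memory");
}
```

The `catch_unwind` here plays the role of a recovery boundary: the point is that the bad access is converted into a well-defined trap rather than silently succeeding.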
CHERI capabilities are memory-safe, because they trap on any attempted memory unsafety. Safe Rust is memory-safe, assuming all the underlying Unsafe Rust traps on any attempted memory unsafety.
C is not memory-safe, even on CHERI, because it has to be trapped by CHERI; it cannot catch itself.
Safe Rust is memory-safe on its own, because memory unsafety can only be introduced by Unsafe Rust; Safe Rust has no unsafe operations. Assuming the Unsafe Rust is sound, Safe Rust cannot cause memory safety to be violated on its own. (You can do `/proc/mem` tricks on Linux, but that's a platform thing...)
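As an illustration of that division of labor (my own sketch, with a made-up helper name): a sound `unsafe` block upholds its invariant internally, so safe callers can't break memory safety through it:

```rust
/// Hypothetical helper: returns the first element, or 0 for an empty slice.
fn first_or_zero(xs: &[i32]) -> i32 {
    if xs.is_empty() {
        0
    } else {
        // SAFETY: we just checked that xs is non-empty, so index 0 is
        // in bounds; the unchecked read cannot touch invalid memory.
        unsafe { *xs.get_unchecked(0) }
    }
}

fn main() {
    // Safe code can call this freely; the invariant is enforced inside,
    // so no safe caller can turn it into memory unsafety.
    assert_eq!(first_or_zero(&[]), 0);
    assert_eq!(first_or_zero(&[7, 8, 9]), 7);
    println!("sound unsafe block, safe interface");
}
```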
I'm not sure if we are talking past each other or what.
1. Non-unsafe Rust is memory-safe because otherwise-unsafe operations (e.g. out-of-bounds accesses on arrays) are guaranteed to trap.
2. A typical C implementation on typical non-CHERI hardware is not memory-safe because various invalid memory operations (e.g. out-of-bounds accesses, use-after-free) may fail to trap.
3. A typical C implementation on CHERI hardware guarantees that all otherwise memory-unsafe operations trap.
I think we both agree on #1 and #2. Am I wrong about #3? If I'm not wrong about #3, then what makes you say that #3 is not memory-safe?
In #3, the C implementation is not what's memory-safe, that's what I've been trying to say. CHERI is memory-safe, but the C isn't what actually guarantees the memory safety. You can dereference a null pointer on CHERI. The architecture can trap it, but that doesn't change the fact that C actually attempted to do it and therefore would not be memory-safe. Only the system as a whole prevents the memory unsafety.
Aha. I think we agree. I originally said "C on CHERI hardware is memory safe" by which I meant "the system as a whole of (C code + CHERI hardware)" is memory safe, but you seemed to think I meant (C code) is memory safe.
I actually had thought you meant something like "the C language is memory safe when run on CHERI". If you mean that running a C program on CHERI prevents memory safety from actually being broken, then yeah, I guess we do agree. On CHERI, even if the C program alone isn't completely sound, and so attempts to violate memory safety, the attempt won't actually succeed, so memory safety won't actually be violated.
I'm not talking about writing robust software, I'm talking about having fun - Rust's type system already provides the type of fun that I would have gotten from a CHERI emulator. That's why only getting to own physical CHERI hardware would truly pique my interest.
That article indeed is quite timely. I do agree with it. Slightly different angle though.
Morello boards are hard to come by, but there have been efforts to offer cloud-computing style use of them, especially now that bhyve support exists; if you're interested I can try to find out more (I'd offer you time on my cloud-computing Morello cluster from MSR, but it's offline for silly reasons). The "Big CHERI" RISC-V FPGA boards are indeed quite expensive, but CHERIoT-Ibex runs on the Arty A7 or the purpose-built Sonata board, and those are much more reasonable. (I'd still love to see it brought up on cheaper boards, too...)
Morello is definitely what I'd been eyeing for a while. AFAICT those are real systems (somewhat like HiFive Unleashed) and not just embedded chips (although they are embedded chips too). I'm kind of bored of microcomputers (Raspberry Pi et al.).
This project embodies so many foundational computing ideals; it represents the best of "many eyes surfacing not just bugs but assumptions".
I’m actually more interested in developing from the RISC-V branch.
Thinking about computers on a 100-year time frame, I've decided that processing “local” symbols is better than more sophisticated processes. RISC-V's direct access to symbol development is where I want to start defining my “local” symbols.
The hope is that this line of thought will reduce the number of abstraction layers between the user and their environment.
CHERI having these concepts defined for RISC-V creates a foundation for local processing of symbols with a “good computing” seal of integrity. I also see it as leading to less reinvention, which should help progress.
Hi, sorry, this is all quite a bit above my head, but I am interested in alternate architectures. Would you mind expanding on what kinds of symbols you're talking about? My mind jumps to the members of an instruction set, but I assume you would have called them instructions in that case. What's the alternative to a local symbol?
The symbols I'm discussing were first documented by Claude Shannon. When I'm discussing symbols or large circuits, I consider them interchangeable views of the same thing; they represent each other.
I'm actually a designer so if I wanted to describe them as instructions I easily could. I'm a stickler for language so I believe that the use of the term instructions limits the conversation because it is too specific to communicate what I'm thinking about.
I would say an early example of local symbol development might be libc. Our current computing environment evolved from the personal computing revolution and the internet. This came about through commercial interests and public adoption. I see this development as reaffirming the ideals first proposed by the "mother of all demos".
What I consider a "local" symbol is being demonstrated by Apple with their on-device ML. The highest ideal to me is that everyone develops their own personal symbol table of digital services. I see CHERI as offering the fast track to that type of computing. I see this as integrating rather than programming.
Intriguing, thanks, I'll have to marinate on that.
I'm a big fan of the mother of all demos. Have you happened to read "What the Dormouse Said"? It has a great narrative of that event, including all the behind-the-scenes work that Stewart Brand contributed, like setting up the TV broadcast truck on top of the hill to relay the video output, and importantly the sounds, of the mainframe back in Menlo Park.
I haven't read that book yet. I have been meaning to but just haven't gotten to it.
I can't believe we can watch the "mother of all demos". That alone proves its significance.
My favorite aspect of the demo is that it reveals that our human computing desires are universal and have been there from the start. It has taken generations to achieve mass adoption of these ideas. This realization takes the mystique away from the BigCos and their services. They are simply human desires for technology that were obvious from the beginning.