SUBLEQ and DawnOS (2017) (esoteric.codes)
69 points by weare138 on April 12, 2021 | hide | past | favorite | 32 comments



> basically every linux software needs hundreds of packages to downloaded separately from internet, as they are created by scripters who randomly using external libraries to even perform an rgb to bgr conversion, and not real programmers who have a clue about writing programs. the newest kde needs 30 seconds to bring up the start menu on my 1,6 ghz atom netbook.

Had a chuckle here. I have to agree we’ve become complacent with our hardware making up for our software.

> i originally planned x86, but i realized how bad it is - after all, current x86 is the result of approx 30+ years of work of 1 million hardware developer, all added his own poop into it to have a cpu. some idiot waked up at morning, and decided to add a MOV with his very own shitty prefixes and various different encodings. another idiot waked up at morning, and decided to add a floating point instruction which adds integers to floating point, and stores the results in a floating point register. another idiot waken up, and decided to add an opcode for adding 4 numbers simultanously. 1500 idiot waken up, added his own opcode, various memory addressing modes, and todays x86 has been born, with one billion transistors minimum just to have an operating system boot.

This man is hilarious


Anyone interested should take a look at Cryptoleq, which extends the premise to homomorphic encryption.



It's a cool idea, but wouldn't every program have full access to all of memory?

The m68000 had no memory protection and could technically run an operating system, but it came with the implied request that no program overwrite the operating system's working RAM.


That was one of the more common causes of crashes back in the AmigaOS days; one program crashed and while doing so it screwed up the OS and other running applications as well.

It worked quite well as long as your applications were well-behaved though.


The Amiga had a preemptive operating system, but because of the lack of memory protection Linus Torvalds called it "pseudo-preemptive" and considered it equivalent to cooperative multitasking.

https://en.wikipedia.org/wiki/Exec_(Amiga)#cite_note-Torvald...


Stability-wise, definitely -- a single misbehaving piece of code could do whatever it wanted, as nothing prevented an application from taking over completely (or, more commonly, just crashing the entire computer).

With well-behaved applications, on the other hand, AmigaOS multitasked on a whole other level compared to contemporary operating systems that used actual cooperative multitasking.

It was a neat little OS. Fast as hell and with several little features that I still find myself missing from modern OS'es. Too bad Commodore screwed up.


I don't think that full access to all memory and SUBLEQ architecture must happen at the same time. For instance, you can already run a 100% ring-0 operating system on top of x86_64 (e.g. DOS, TempleOS, Mezzano), but it should also be possible to equip a SUBLEQ CPU with some kind of programmable MMU with memory protection that will cause segfaults to happen when invalid addresses are accessed.

(And then you'll need syscalls to actually jump to the proper ring and access that memory, and such stuff, which might go against the political agenda of DawnOS.)


Come to think of it... how viable would such a "computer" be? Would it be prohibitively inefficient? Or the other way around? What about parallelization: very easy, impossible, or something in between?


The "trick" of single-operation ISAs is that in order to achieve Turing completeness, they have to pick an instruction that effectively "encodes" multiple different instructions.

So if you actually wanted to make and optimize one enough to be a "real" computer, you'd effectively end up with a conventional RISC architecture, where the only difference would be this weird way to encode instructions sprinkled on top of it.

Once you're there, it's just Turing machines all the way up: build a compiler, build an interpreter, run nodejs, whatever. Processors are so fast these days you could probably live with the inefficiency, assuming sufficiently smart compilers are available for the architecture.

It's just... not that different. Computers are weird like that.


Not prohibitively inefficient but fundamentally inefficient.

SUBLEQ was designed to be a simple processor that could be implemented and programmed in a course. It was not designed to be practical or efficient, only demonstrative.

From a processor design perspective, the problem with SUBLEQ as an instruction set is that its instruction is both an arithmetic operation and a branch. And some things have to be implemented using self-modifying code. These are impediments to branch prediction, instruction pipelining and reordering, and speculation.

To make the fastest processor possible for SUBLEQ, one would basically have to recompile the code into a better instruction set. And that would be hard, as it would require recovering the basic block structure, which is hidden by the self-modifying code.


> What about parallelization

If you mean multi-core, I guess you could do memory-mapped registers to control additional cores from the first one? In its simplest form, you write an address (reset vector) into a predetermined memory location, and that core starts executing from that address. IIRC real multi-core CPUs work something like that.

If you mean multitasking on a single core, you should definitely be able to implement the cooperative kind. Preemptive multitasking requires timer interrupts.


The major inefficiency I see at the hardware level is the small amount of work done per instruction. That means more instruction data has to be fetched by the CPU for the same amount of computation (though with only one instruction, I suppose you only need to fetch the operands). I'm sure a large cache would mitigate this problem somewhat.


Wild and wonderful!

I wonder if there's a connection between the subleq op and the ReLU op. If so, perhaps you could parallelize the chip and have it instantly optimized for neural networks. Seems like you could go a long way.


Well I just posted a link to my little subleq article in another thread the other day but in case someone didn't see it:

https://portal.mozz.us/gemini/biomimetic.me/subleq.gmi

On that page I have a little subleq program written using HSQ (High-Level Subleq, a C-like language that compiles to subleq) that uses ANSI graphics for a simple D&D dice roller.

One thing I think should be done a little differently than, for example, the versions on Rosetta Code is to make sure that reading STDIN does not require an ENTER key. That allows for interactive programs rather than line-based ones.
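On POSIX systems, one common way to get unbuffered, per-keypress input is to switch the terminal out of canonical mode with termios. A hedged sketch (the function name `raw_mode` is mine, not from the linked interpreters; it returns -1 if stdin isn't a terminal):

```c
#include <stdlib.h>
#include <termios.h>
#include <unistd.h>

/* Put the terminal into non-canonical mode so read() returns each
   keypress immediately instead of waiting for ENTER. */
static struct termios saved;

static void restore(void) { tcsetattr(STDIN_FILENO, TCSANOW, &saved); }

int raw_mode(void) {
    if (!isatty(STDIN_FILENO)) return -1;         /* not a terminal  */
    if (tcgetattr(STDIN_FILENO, &saved) < 0) return -1;
    struct termios raw = saved;
    raw.c_lflag &= ~(ICANON | ECHO);              /* no line buffering,
                                                     no local echo    */
    raw.c_cc[VMIN]  = 1;                          /* block for 1 byte */
    raw.c_cc[VTIME] = 0;
    atexit(restore);                              /* undo on exit     */
    return tcsetattr(STDIN_FILENO, TCSANOW, &raw);
}
```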

Here is my little C subleq interpreter that does it that way: https://github.com/runvnc/sq/blob/master/sq.c .. and it also flushes STDOUT quickly if I recall. Mine is based on this one: https://github.com/davidar/subleq/blob/master/src/sq.c

Maybe using ANSI escape codes is cheating. But if you don't mind cheating, just having the STDOUT allows you to also do vector graphics, if your terminal happens to support that. Such as an xterm which can do vector graphics (draw lines) using Tektronix 4010/4014 escape codes.


Unfortunately, this article manages to get the single example execution entirely wrong.

> We begin by subtracting 7 from 7 (the locations of A and B), and will store the result (0) in location B.

Here the example gets confused as it shows the 0 result stored not in location B (address 4) but in the middle of the executing instruction (address 1). This mistake is then repeated for the next instruction.


For what it's worth, the author got banned from the OSDev forums in record time.


And every other forum where he obsessively spammed his incoherent and nonsensical rants.

He may be a competent programmer but he certainly has mental health issues, AFAIK.

Googling DawnOS will find you quite a few websites discussing it, also thorough debunkings of the supposed virtues of his "SUBLEQ" architecture.


"basically every linux software needs hundreds of packages to downloaded separately from internet, as they are created by scripters who randomly using external libraries to even perform an rgb to bgr conversion, and not real programmers who have a clue about writing programs"

I... am inclined to violate HN rules, let's just say I closed the tab at that point.


I haven't seen this kind of bombastic rant applied to programming since the early days of Slashdot. Due credit to the author for putting their money where their mouth is and actually building a GUI operating system on this ISA.


Ah /.

How remarkable that it fell away from vogue; a fixture of the early web, and a victim of the dot-bomb.


Perhaps you might have come away from it with a better opinion had you continued reading and realized that subleq is an OISC, and any substantial program implemented in it is guaranteed to be a practical joke.


I wonder if you were transported back to 1940 and had to build a computer out of discrete valves or other components to help the war effort ... Is this what you should build? Or would something else be more sensible?


The fu? This guy has no idea what he is talking about.

> there are also lack of technical knowledge at linux side, for example, most linux distributions cant even detect the cpu type, and force a kernel using PAE or SSE kernel on a 6x86 cpu, which of course will crash at boot.

Uh, yes it can? cat /proc/cpuinfo? What he is (incorrectly) describing is that Linux distributions are often compiled for i686 as a baseline, so that programs don't hit illegal instructions on any supported CPU; a pre-i686 chip like the Cyrix 6x86 falls below that baseline. Nothing prevents you from recompiling with optimizations and -march=native.

There are so many levels of crazy in just the first paragraph.



SUBLEQ seems to be an implementation of “x86 MOV is Turing complete,” though I’ve yet to see a use for a single-instruction chip (in theory maybe it could be super pipelined or super efficient).


SUBLEQ is in fact the second citation in the _mov is Turing-complete_ paper: F. Mavaddat and B. Parhami, "URISC: The Ultimate Reduced Instruction Set Computer," International Journal of Electrical Engineering Education, 25(4):327–334, 1988.


There are several OISCs (one-instruction set computers): https://esolangs.org/wiki/OISC

I used BitBitJump for a hobby project once, although that's technically not Turing-complete (it has a fixed pointer size, like C). I also like Flump, which is unbounded.


Wonder what this might look like if implemented with memristors, such that memory and processing were a single logical unit.


It's hard not to admire a project that not only gives RISC-V a run for its money, but imperialism too.


I don't mind the start of the article. I regret wasting time reading the actual Q&A - what a bunch of trash. Filled with errors and grandstanding and mixed in with mostly incoherent ranting.

I'd strongly recommend skipping this one.


These are the kind of heroes we need. DawnOS, TempleOS and I am sure a lot of others that I do not know about yet.

Both are amazing achievements.

For someone to look at where we are today, say it sucks, and -try- to do something about it should be massively encouraged.

Perhaps neither will go anywhere but attempts can fuel other ventures and spawn ideas.

(and yes, I still think TempleOS can give people ideas even though the man behind the amazing effort is dead)

I also think that inclusiveness and codes of conduct need to be revised to include non-neurotypical people of all sorts.

Unfortunately, they often end up the target of hate, ostracism and ridicule.

I can say as a non-neurotypical that there is very little room in today's world for us.



