
This is from 2012.

He's wrong of course about being the only user.

I love this software, along with k/q. I admire the work Mr. Shields has put into this project. I especially like the use of musl and provision of static binaries.

I do not use Perl, Java, Python, JavaScript, Go, Rust, Clojure, etc., etc. Whatever the majority of people are recommending, that is generally not what I use. It just does not appeal to me.

I guess I am stubborn and stupid: I like assembly languages, SPITBOL, k/q, and stuff written by djb. Keep it terse. Ignore the critics.

Yet this is now on the front page of HN. Maybe because it is the weekend? I really doubt that the software I like will ever become popular. But who knows? Maybe 10 years from now I will look at this post and marvel at how things turned out.

There is no "structured programming" with spitbol. No curly braces. Gotos get the job done. Personally, I do not mind gotos. It feels closer to the reality of how a computer operates.

Would be nice if spitbol was ported to BSD in addition to Linux and OSX. As with k/q, I settle for Linux emulation under BSD.




Fascinating. I'm curious: what do you do with such languages and tools? I'm not trolling or trying to start a flame war; I'm genuinely curious as to the best use of these tools. Also - are you doing it as a hobbit, or commercially (or both)? How did you discover this tech?


> are you doing it as a hobbit

Thanks for that typo, I laughed immediately after drinking some tea and now it's all over my monitor :(


Bilbo Bugends


Clearly as a 'hobbit' lol


> It feels closer to the reality of how a computer operates.

In an alternate reality, high-level languages would be wired directly into our "hardware", via microcode or FPGAs or what have you. Software systems would be designed first, then the circuitry. In this alternate reality, Intel did not spend decades doubling down on clock speed so that we wouldn't have time to notice the von Neumann bottleneck. Apologies to Alan Kay. [0]

We should look at the "bloat" needed to implement higher-level languages as a downside of the architecture, not of the languages. The model of computing that we've inherited is just one model, and while it may be conceptually close to an abstract Turing machine, it's very far from most things that we actually do. We should not romanticize instruction sets; they are an implementation detail.

I'm with you in the spirit of minimalism. But that's the point: if hardware vendors were not so monomaniacally focused on their way of doing things, we might not need so many adapter layers, and the pain that goes with them.

[0] https://www.youtube.com/watch?v=ubaX1Smg6pY&t=8m9s


Don't we have cases of this alternate reality in our own reality? Quoting from the Wikipedia article on the Alpha processor:

> Another study was started to see if a new RISC architecture could be defined that could directly support the VMS operating system. The new design used most of the basic PRISM concepts, but was re-tuned to allow VMS and VMS programs to run at reasonable speed with no conversion at all.

That sounds like designing the software system first, then the circuitry.

Further, I remember reading an article about how the Alpha was also tuned to make C (or was it C++?) code faster, using a large, existing code base.

It's not on-the-fly optimization, via microcode or FPGA, but it is an 'or what have you', no?

There are also a large number of Java processors, listed at https://en.wikipedia.org/wiki/Java_processor . https://en.wikipedia.org/wiki/Java_Optimized_Processor is one which works on an FPGA.

In general, and I know little about hardware design, isn't your proposed method worse than software/hardware codesign, which has been around for decades? That is, a feature of a high-level language might be very expensive to implement in hardware, while a slightly different language, with equal expressive power, be much easier. Using your method, there's no way for that feedback to influence the high-level design.


Don't forget LISP machines! I think they might be the perfect example of what he was referring to.

https://en.wikipedia.org/wiki/Lisp_machine


Indeed!


I just wanted to thank you (belatedly) for a thoughtful reply. The truth is, I don't know anything about hardware and have just been on an Alan Kay binge. But Alan Kay is a researcher and doesn't seem to care as much about commodity hardware, which I do. So I don't mean to propose that an entire high-level language (even Lisp) be baked into the hardware. But I do think that we could use some higher-level primitives -- the kind that tend to get implemented by nearly all languages. Or even something like "worlds" [0], which as David Nolen notes [1, 2] is closely related to persistent data structures.

Basically (again, knowing nothing about this), I assume that there's a better balance to be struck between the things that hardware vendors have already mastered (viz, pipelines and caches) and the things that compilers and runtimes work strenuously to simulate on those platforms (garbage collection, abstractions of any kind, etc).

My naive take is that this whole "pivot" from clock speed to more cores is just a way of buying time. This quad-core laptop rarely uses more than one core. It's very noticeable when a program is actually parallelized (because I track the CPU usage obsessively). So there's obviously a huge gap between the concurrency primitives afforded by the hardware and those used by the software. Still, I think that they will meet in the middle, and it'll be something less "incremental" than multicore, which is just more-of-the-same.

[0] http://www.vpri.org/pdf/rn2008001_worlds.pdf

[1] https://www.recurse.com/blog/55-paper-of-the-week-worlds-con...

[2] https://twitter.com/swannodette/status/421347385915498496


Exactly. If the world had standardised on something like the Reduceron [1] instead, what we currently consider "low-level" languages would probably look rather alien.

[1] https://www.cs.york.ac.uk/fp/reduceron/


> There is no "structured programming" with spitbol. No curly braces. Gotos get the job done. Personally, I do not mind gotos. It feels closer to the reality of how a computer operates.

That's because it is closer, as I'm sure you know, since you stated your fondness for assembly languages. I even like gotos for specific, limited tasks (advanced loop control). That said, I think preferring them over more "modern" constructs such as if/while/for is sort of like disparaging all those new gas-powered carriages because you can get around just fine with your horse to power your carriage, thankyouverymuch. There are very good reasons to approach most uses of goto with skepticism.


I don't think that's a good metaphor. There are no actual horses inside any gasoline (or electric) engine. But there are gotos behind many (though not all) of these modern constructs...
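
For instance, a plain while loop can be hand-rewritten into label-and-goto form with identical behavior. A toy C sketch of my own - not anything a compiler literally emits, just the shape of the lowering:

    #include <stdio.h>

    int main(void) {
        int n = 3;

        /* structured form */
        while (n > 0)
            n--;

        /* the same control flow, hand-desugared into gotos */
        n = 3;
    loop:
        if (n <= 0) goto done;
        n--;
        goto loop;
    done:
        printf("n = %d\n", n);
        return 0;
    }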


There's actually a lot more implied by the metaphor than just goto and constructs built upon it. It's about the art and science of programming, and advancements in the field. I specifically didn't say car or automobile because I wanted to evoke the feeling that the "new" thing being shunned is actually itself far behind the current state of the art. For loops and if blocks aren't very new and shiny either. You know what is (for some relative value of "new" that includes coming back into prominence or finally gaining some traction)? Static code analysis. Typing concepts beyond what C offered. IDEs and tooling infrastructures to assist development. Languages that support formal proofs.

Goto is essential; it's the glue that holds the instruction set together. That said, we must not fetishize it, just as we must not fetishize items of the past that are largely superseded by what they helped create. To do so slows us down, and we fail to achieve what we otherwise could. We must not forget them either; they have their places, and forgetting them would also slow us down.


> But there are gotos behind many (though not all) of these modern constructs...

I'd argue that e.g. an x86 LOOP instruction is far closer to a do/while loop than to a goto. Most of the jump instructions I see in my disassembly aren't unconditional like goto is - if anything, car engines are closer to horses in what they accomplish than, say, jnz is to goto! Even jmp certainly doesn't use named labels, as any goto worth its salt will - instead you'll see absolute or relative address offsets.

>> Personally, I do not mind gotos. It feels closer to the reality of how a computer operates.

There's a time and place to get close to the hardware, but I've never felt that goto got me meaningfully closer. Of course, my first and primary exposure to GOTO was in BASIC - where it was interpreted.

You want to get close to the hardware? Play with intrinsics. Open up the disassembly and poke at it with a profiler. Find out if your algorithm manages to execute in mostly L1 cache, or if it's spilling all the way out into RAM fetches. Figure out where you need to prefetch, where you're stalling, where your cycles are being spent. Diagnose and fix some false sharing. Write your own (dis)assembler - did you know there's typically no actual nop instruction? You simply emit e.g. xchg eax, eax, which happens to do nothing of note, and label it "nop" for clarity.
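
To make the false-sharing point concrete: the fix is usually just data layout. A sketch in C11 (struct and field names invented, and it assumes 64-byte cache lines):

    #include <stdalign.h>

    /* Two counters hammered by two different threads. Packed together
       they land on the same cache line and the cores fight over it. */
    struct counters_packed {
        long a;               /* written by thread 1 */
        long b;               /* written by thread 2 - same line as a */
    };

    /* Giving each counter its own 64-byte line removes the contention. */
    struct counters_padded {
        alignas(64) long a;   /* thread 1, own cache line */
        alignas(64) long b;   /* thread 2, own cache line */
    };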

IMO, you'll have more time to do these things when embracing the advantages that structured programming can provide. Of course, I may be preaching to the choir, at least on that last point.


NOP is most certainly a NOP on modern x86 CPUs. Yes, the encoding matches what would be XCHG EAX,EAX (or AX,AX or RAX,RAX) but it hasn't been that for quite some time as it could create a pipeline stall waiting for [RE]AX to be ready for the following instruction.

As for JNE not being a GOTO, it most certainly is. It just happens to be taken only under certain circumstances (along with the other conditional jumps, and yes, that's how they are described). Compare:

    IF X <= 45 GOTO THERE
with

    CMP EAX,45
    JLE THERE
Not much of a difference if you ask me. Also, the LOOP instruction is more of a FOR loop than a DO/WHILE, as the ECX register is decremented as part of the instruction.

And let me assure you, when writing assembly, you almost always use labels. A disassembly will show you the absolute/relative address because that's all it has to go by.


Gotos in spitbol can be conditional or unconditional.


I have a question: In his analogy, what represents the hardware and what represents the software? Wouldn't the change from horses to combustion engine be a change in hardware? And software might be represented by something like the reins or a gas pedal?


On both sides it's technology and advancement of the status quo. More explicitly, it's programming and personal transportation.

Oh, and my rant wasn't aimed at you, per se, but at the statement about goto, which I expanded in isolation into a fictional point of view. That point of view may or may not have any relation to how you feel about programming and goto; I have no idea.


Gotos get the job done; so do NAND gates and flip-flops.


Gotos were shot down more than 40 years ago and they have never really made a comeback since. I've seen they are still used for error handling in the kernel, though.


And error handling in many C applications, since in the absence of exceptions they're very useful for doing cleanup.
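
The usual shape is a chain of labels at the bottom of the function that releases whatever was acquired, in reverse order. A schematic example of my own (the function and its contents are made up, but this is the familiar pattern):

    #include <stdio.h>
    #include <stdlib.h>

    /* Each early exit jumps to the point that unwinds everything
       acquired so far; the success path falls through the same cleanup. */
    int process(const char *path)
    {
        FILE *f = NULL;
        char *buf = NULL;
        int ret = -1;

        f = fopen(path, "r");
        if (!f)
            goto out;

        buf = malloc(4096);
        if (!buf)
            goto out_close;

        if (fread(buf, 1, 4096, f) == 0)
            goto out_free;

        ret = 0;                    /* success */

    out_free:
        free(buf);
    out_close:
        fclose(f);
    out:
        return ret;
    }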


Exceptions are just a fancy word for gotos.


Just like functions are just a fancy word for gotos?


Functions are a fancy word for gosubs!


Java employs a limited goto in the form of labelled break and continue statements. C# includes an explicit goto statement for breaking out of loops.

Goto is definitely still out there.



