Yalo – Lisp OS running on bare metal x86-64 hardware (github.com)
192 points by agumonkey on Jan 26, 2014 | 39 comments



This seems like a cool start for a project, but it may be a bit premature to share it and call it a "Lisp OS running on x86_64", because at this point it doesn't run any Lisp code, nor does it run on x86_64.

Basically, all it is at this point is a stage 1 bootloader in 16-bit real mode that prints out "REPL>". It does detect the CPU type and verify that it can enter long mode (64-bit mode), but it never actually enters long mode.

I find it awkward that there's so much hype on HN and Twitter, with tons of people starring and forking it, when it is rather evident that not many people actually read the code to see what it does. The project has had a few recent commits, though (after a hiatus of a few years), so there might be some progress happening.

However, what is very cool about this project is the way it is written: basically, Lisp macros that implement a single-pass x86-64 assembler.
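To make the single-pass idea concrete, here is a minimal sketch in Python (not yalo's actual code, and with made-up mnemonic names): each instruction form is translated directly to bytes in one linear pass, and anything the assembler doesn't know about is rejected, much like the "unsupported instruction" error quoted elsewhere in this thread.

```python
# Hypothetical sketch of a single-pass assembler: instructions are
# (mnemonic, *operands) tuples, emitted straight to machine code.
# Only a few toy encodings are included, just to show the shape.

def assemble(instructions):
    """Translate a list of instruction tuples into x86-64 machine code."""
    out = bytearray()
    for ins in instructions:
        op = ins[0]
        if op == "nop":
            out.append(0x90)                 # NOP
        elif op == "ret":
            out.append(0xC3)                 # RET (near)
        elif op == "mov-al-imm8":            # MOV AL, imm8 -> B0 ib
            out += bytes([0xB0, ins[1] & 0xFF])
        else:
            raise ValueError(f"unsupported instruction {ins!r}")
    return bytes(out)

code = assemble([("mov-al-imm8", 0x41), ("nop",), ("ret",)])
print(code.hex())  # b04190c3
```

In the real project the table of encodings is expressed with Lisp macros, which means labels, constants, and repetitive encodings can be generated by ordinary Lisp code rather than a separate macro language.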

Let's hope that this project gets the love and the attention it needs to actually grow into a proper Lisp OS.


The D-Wave quantum prototype is running a full Lisp OS. I want to work there just to witness a working Lisp machine.


I'm the original author (not the OP), and I'm quite surprised that the project was starred by so many people on GitHub and discussed on HN. The project was originally hosted on Google Code, and I decided to migrate to GitHub last night.

As many of you have already commented, the project is only at a very, very early stage, with an assembler written in Common Lisp and a 16-bit bootloader. Currently I can only describe it as a toy.


Thanks for an interesting project! I have written some bare-metal OS projects before, and I have a few suggestions for you.

First of all, I recommend using a multiboot-capable bootloader (GRUB is one; QEMU and Bochs have built-in multiboot bootloaders). This will allow you to skip some of the arcane initialization code that's otherwise required. In particular, loading the code from a floppy is painful because most PCs these days don't have a floppy drive, so while you'll be able to test the code on emulators, running on real hardware requires getting hold of one. With GRUB, all you need to do is put the binary image in /boot and you can boot it on real hardware.
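What makes a kernel "multiboot-capable" is tiny: a three-word header near the start of the image whose words sum to zero. A small Python sketch of the Multiboot v1 header layout (values per the 0.6.96 spec; the flags chosen here are one reasonable combination, not the only one):

```python
# Build the 12-byte Multiboot (v1) header a bootloader like GRUB scans
# for in the first 8 KiB of the kernel image: magic, flags, checksum,
# where (magic + flags + checksum) mod 2**32 must be 0.
import struct

MAGIC = 0x1BADB002          # Multiboot v1 header magic
FLAGS = 0x00000003          # bit 0: page-align modules; bit 1: want memory map

checksum = (-(MAGIC + FLAGS)) & 0xFFFFFFFF
header = struct.pack("<3I", MAGIC, FLAGS, checksum)

assert (MAGIC + FLAGS + checksum) & 0xFFFFFFFF == 0
print(header.hex())  # 02b0ad1b03000000fb4f52e4
```

In a real kernel these three words live in an assembly stub linked into the early part of the binary; the point is just how little is needed before GRUB will happily load you.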

When using multiboot, the bootloader will set up 32-bit identity-mapped protected mode for you.

Many bootloaders also support setting up a graphics mode, which will save a lot of trouble: otherwise you'd have to drop back to 16-bit real mode to access the BIOS interrupts that change the graphics mode.

I do understand the appeal of skipping the bootloader and doing everything from scratch, but when it comes to getting things done, that's just not a good way to go.

I also recommend staying in 32-bit mode; it's just a lot easier. In 64-bit mode you'll need to deal with paging (no identity mapping is available), and this becomes a burden very early in the project, when you need to access memory-mapped I/O devices (in particular the APIC interrupt controller) and the Intel MultiProcessor configuration tables, which are scattered around the physical memory address space.
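The paging burden comes from long mode's four-level page tables: every 48-bit virtual address is decoded into four 9-bit table indices plus a 12-bit page offset, and the kernel has to build all four levels before it can touch anything. A small sketch of that decoding:

```python
# Decode a long-mode (4-level paging) virtual address into the indices
# used to walk the PML4 -> PDPT -> PD -> PT hierarchy, plus the final
# 12-bit offset within the 4 KiB page.
def page_walk_indices(vaddr):
    return {
        "pml4":   (vaddr >> 39) & 0x1FF,
        "pdpt":   (vaddr >> 30) & 0x1FF,
        "pd":     (vaddr >> 21) & 0x1FF,
        "pt":     (vaddr >> 12) & 0x1FF,
        "offset": vaddr & 0xFFF,
    }

# The local APIC's default MMIO base, one of the physical addresses a
# 64-bit kernel must map before it can take interrupts:
print(page_walk_indices(0xFEE00000))
```

So even "just read an APIC register" means first allocating and wiring up entries at all four of those indices, which is exactly the early-project burden described above.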

Once you have something that actually works, it is easier to move to 64-bit mode than it is to start from scratch in long mode. I found this out the hard way: my hobby OS project[1] stalled because it was just too much work to get anything done in long mode.

In addition, I would recommend emitting debuggable ELF files instead of flat binary images. The QEMU/Bochs monitor will take care of your most basic debugging needs, but to go further, remote debugging with GDB is going to be very helpful.

Best of luck for your project!

[1] https://github.com/rikusalminen/danjeros


Thanks a lot for your advice and sharing your experience!


That's the kind of project I think about in the shower or when I'm procrastinating.

I need to hack on X.

- Current editors suck; I should create an editor.
- Current languages suck; I should create a language, and I'll build my editor in it.
- Current OSes suck; I'll write the OS in my editor using my language.

Meh, maybe another day. Two hours later, X is hacked together in Python using Vim.


I did this.

I built the hardware (a Z80 SBC), wrote the OS (UZI180, a tiny single-tasking Unix-alike), and started to write an editor (like ed). Then I got distracted by something shiny and abandoned it.

Think I blew 200 hours on that and don't even have the source now.


Next time, build a source control system first (and/or a snapshotting filesystem).


My host machine was MS-DOS, and this was before I was aware of things like VCSes. I still have the ROM and board, so I could possibly disassemble it.

The actual loss was caused by a disk failure and an untested backup system (my bad).


I've been doing this; I'm working on the OS part now. Eventually I'd like to write a small vi clone, a C compiler, and an x86 assembler, and then maybe do an x86 clone in Verilog and/or an x86 emulator.


This seems so horribly complex and impossible. Amazed at the Gurus out there who can just decide to do this.


This is incredibly similar to an assembler I wrote in Scheme, almost a decade ago. I was using Bochs and even unit testing with NASM.

I think people are going to be disappointed, if they can even get this to run in Bochs. About the only "Lisp OS" thing here is the REPL, which looks like it's incomplete anyway. Movitz is ahead of it by a bit, since it at least has, IIRC, a working compiler and possibly a driver or two.

But nothing we have today is anywhere close to what people think of, when they think "Lisp OS". However, I've used Genera, and I'd really caution people about getting this romanticized view of it. There are still many good ideas there, but Linux and OS X today are not the Unix of 1980. We've come a long way.


We've come a long way, but it's a bit tragic that so many of our eggs are in what amounts to two baskets. One is the "Unix-ish" basket, where we have Linux, the *BSDs, OS X, Haiku, and even Hurd. The other is the heirs to VMS, where we have Windows, which is completely opaque to research, and ReactOS.

It's nice to have people exploring OS ideas in other spaces as well.


That's true. It would be nice to have a true alternative. I still think about it from time to time. My Scheme project was to be a SchemeOS. It wasn't even the sheer amount of work that got to me and ultimately did it in; it was the realization that I'd have to go it completely alone.

I was a lurker for years when the TUNES project was going on. I won't say a new "from scratch" OS is impossible, but I will say that TUNES ended up with a death by a thousand bike sheds. Linux worked thanks to POSIX and other *ix systems in existence. The groundwork was there. There was a model. Anyone that wanted to contribute knew what a Unix looked like and how a Unix functioned.

Not so with any LispOS. Or TUNES. It's a new thing, wholly different from anything that existed before. The few people that had access to Genera weren't going to want to replicate that for reasons that become apparent when using it (it's quite outmoded in many ways). And since there was no Genera community and nothing to really salvage there, it made more sense to start over anyway.

With all that said, I believe the true appeal of a LispOS is that it's homogeneous. It's Lisp from top-to-bottom. Which means it's great for programming at any level, great for productivity. But not so great for people that love variety in languages.



I saw this via xach's tweet (https://twitter.com/xach/status/427490779326451712) suggesting Movitz isn't moving anymore, but maybe I'm reading it wrong.


Do you know how to try running Movitz? Many links on that page are broken.


Excitedly tried it out, but I get this error on both 32-bit and 64-bit Ubuntu:

  $ git clone https://github.com/whily/yalo
  $ cd yalo/cc
  $ ./lnasdf
  $ sbcl --version
  SBCL 1.0.58
  $ sbcl
  * (require 'asdf)
  * (asdf:oos 'asdf:load-op 'cc)
  * (cc:write-kernel "floppy.img")
  #<THREAD "main thread" RUNNING {AB4FD11}>:
    match-instruction: unsupported instruction (JC NO-BGA-ERROR)
Also, I'd love to hear more about Ink. I'm not sure where it is in such a tiny codebase.


From Github: "The system programming language is Ink, a new Lisp dialect which combines the elegance of Scheme and powerfulness of Common Lisp."

And no link! It's like eating chocolate in front of a child and not giving him any. Me want.


This is a description of every Lisp dialect I've ever heard of.


Somewhat related: the VPRI work on an OS (for, among other things, the OLPC), written in a cascade of DSLs:

http://vpri.org/fonc_wiki/index.php/XO_Hacking


Yes, an Alan Kay "child". :)


It looks like it's just written in assembly, but with a Lisp syntax.


That's a common idea in low-level Lisp programming. Note that you still get macros, so it's a lot nicer than writing straight-up assembly language.


Not entirely, although a Common Lisp function wrapper around NASM syntax does seem to play a big role.


Does any Lisp code actually run in the OS itself, or is it just involved in building the OS image?


I don't understand; everything is assembly/machine code when it executes.


To be more clear: is there any compiled Lisp code, or is it just manually written assembly?


I wouldn't raise my hopes too much.

Of course he's creating his own Lisp dialect, no doubt with the purpose of "fixing" a few of the myriad artificial, perceived non-problems of Common Lisp and its ancestry. ( * )

I'll be one happy nerd if this turns out to be anything but yet another crock of ---- layered on top of an idiotic architecture, but given the vanishing fraction of sane people among today's "hackers" (I can't make those quotes big enough), I'm very much in doubt. Show of hands, everyone: who actually believes that this thing is being designed from the beginning with the purpose of total inspectability and modifiability?

( * ) I'm not saying that Common Lisp doesn't have problems; for instance, the interactivity and debugging support of today's Common Lisp implementations is pathetic compared to Genera and friends. I'm saying that the actual problems are so far beyond the casual idiot's horizon that he tends to make up problems he can actually understand (parentheses! archaic naming! standard too big! blah blah blah!).


I have been looking for something like this. Original author: do you have any screenshots to share of Yalo running in VirtualBox?


I am sure I am missing something, but I really want to know how to run it on a VM.



Finally, we can have the technology they had in the 70s. I hope we reach 80s tech by 2024.


In some respects, the 1990s and 2000s were a massive regression over the 70s and 80s. For some of us, at least.

In the 80s, computer users were not segregated into producers and consumers as strongly as happened later with the consolidation of closed systems. Machines came with schematics, and vendors openly explained every internal function and design.

Under the guise of simplification and layers of abstraction, most users and even developers have been progressively denied control over their computing devices. Smalltalk- and Lisp-based systems proposed a leaky abstraction model where every user could delve as deep as he wanted into the system, which enforces a degree of openness that is very inconvenient for monetisation of the system itself. That is why they were suppressed (simply by allocating resources to producer-consumer models, which are easy to sell and, most importantly, easier to monetise, as the provider becomes the master).

Note that this culture is extremely hard to eradicate as people stick to what they know and a total shift is unlikely to meet with collaboration from the companies that profit from said control.


See, in the 1970s this was the stuff of top-notch research labs and expensive computing centers. The same applies to multi-core, multi-megaflops CPUs and advanced multi-user OSes that run multiple VMs while networking with computers on the other side of the globe. It was all there in the 1970s, too. Only today it can run on your laptop, or on a smaller device like the Intel Edison, for a few hundred dollars.

Can you notice the difference?

Another area where massive copy-catting of the 1960s and 1970s is happening today is space-launch tech. And again, where the 1960s had to spend a significant portion of national budgets, today's private firms do pretty well with a fraction of the valuation of a picture-sharing service, turning a profit in the process.

The goals did not change all that much, but the means are now much more accessible.


But by the mid-1980s, lots of stuff most people don't have today could run on home computers in the few-hundred-dollar range. I'm still missing stuff I had on my Amiga 500.

Sure, there's lots and lots of stuff I can do on my current computers I wouldn't have a chance in hell of doing on my A500.

Yet I find myself seriously considering running UAE (an Amiga emulator) or AROS (an Amiga OS reimplementation) in a VM to run Amiga mail apps (why could I handle thousands of messages from BBSes on my A500 with 1 MB of RAM, while Thunderbird chokes on my smaller inbox?) and my favourite editor (FrexxEd).

I keep cringing when I see hacky reimplementations of stuff that was done so nicely on the Amiga. E.g., want transparent compression in your applications? Implement support for XPK and your users can just drop in libraries implementing whatever formats they want, rather than the Linux way, where everyone implements a varying, small, incomplete subset of the available formats, making you resort to a set of binaries with inconsistent command-line options.

Similarly, every app that needs image loading ends up supporting only a small subset of common formats, rather than following the AmigaOS approach of datatypes, which lets users drop in a library for whatever format they want and instantly have support for loading the new format in every application.

As a result, apps that haven't seen updates in a couple of decades still support the latest formats.
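The datatypes idea is essentially a system-wide plugin registry that applications query instead of linking format code themselves. A toy sketch of the pattern in Python (the function names here are made up for illustration, not AmigaOS APIs):

```python
# Toy sketch of the AmigaOS "datatypes" idea: applications ask a shared
# registry for a loader by file extension, so dropping in a new loader
# module instantly gives every application support for the new format.
loaders = {}

def register_datatype(extension, loader):
    """Install a loader; analogous to dropping a datatype library in."""
    loaders[extension] = loader

def load(path):
    """Every application calls this instead of bundling its own decoders."""
    ext = path.rsplit(".", 1)[-1]
    if ext not in loaders:
        raise KeyError(f"no datatype installed for .{ext}")
    return loaders[ext](path)

# Installing support for a format is one registration; every caller of
# load() picks it up with no per-application changes.
register_datatype("txt", lambda path: f"text from {path}")
print(load("readme.txt"))  # text from readme.txt
```

The design choice being praised in the thread is that the registry lives at the system level rather than inside each program, so format support is a property of the installation, not of individual apps.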


When I read Xerox PARC papers, it always amazes me what was already possible back in the days when I had a ZX Spectrum to play with.

The hypertext and embedding capabilities of their GUIs, the use of system programming languages with automatic memory management, among many other inventions.

Sadly, for many reasons the historians can talk about, the industry ended up going the AT&T way instead of the Xerox PARC way.


Bret Victor's talk is great, for those who still don't know it:

http://vimeo.com/71278954

I like to dig into computer history, and when I look at everything that was accomplished at Xerox and other places versus what we got, the industry really moves at a snail's pace in some areas.

At Xerox, the GUI systems were developed in Lisp, Smalltalk, and Mesa/Cedar, the latter being a systems programming language with automatic memory management.

Those systems had networking, GUIs, multiuser capabilities, object-embedding capabilities, and the precursors of IDE and REPL environments.

Today we are still trying to reinvent those systems. How much further along would we be today if they had been picked up as a starting point?


Unlikely, as everyone will still be busy porting all the stuff to JavaScript.



