
What is Fun?

Fun is an experiment, just for fun, but Fun works!

Fun is a highly strict, but also very simple, programming language. It looks like Python (my favorite language), but there are differences.

Influenced by Bash, C, Lua, PHP, Python, and a little Rust.

Fun is written in C (C99) and many libs are implemented in pure Fun. Smartcard support (PCSC) is available as an extension written in C, while sha256 and sha512 are implemented in Fun.

Would love to get some feedback (hanez@fun-lang.xyz), but please read the content on https://fun-lang.xyz first.

Have fun! hanez


I need to say that issues can be opened at https://github.com/hanez/fun/issues, but I do not develop Fun on GitHub. Pull requests will not be accepted there. Send patches or ask me for an account on https://git.xw3.org... ;)

How are you handling shared state with the new concurrency primitives? Since it is embeddable I am curious if you went with a global lock approach like Python or isolated states similar to Lua. Managing thread safety while keeping the C API simple is usually the hardest part of these implementations so I would love to hear more about the architectural choices there.

I added a section to https://git.xw3.org/fun/fun/src/branch/main/docs/internals.m... that describes this in more detail. I copied it from other documents a few seconds ago and I am not sure if it is all 100% correct. I will check it and update the file if I find anything wrong...

Isolated state is the sensible choice for embedding; fighting a GIL from the host language is miserable. I guess those shared-memory primitives are the escape hatch for when message passing serialization gets too expensive? The documentation seems consistent, though I'd be curious if the per-VM garbage collectors need to stop the world to scan those shared regions.

• Yes: isolated state (one VM per embed) is usually the right default. It avoids global locks (e.g., a GIL), makes scheduling simpler, and keeps failure/lifetime boundaries crisp.

• Yes: shared‑memory primitives are the “escape hatch” for when copying/serialization is too expensive, but they should be very carefully constrained.

• GC: If you design shared regions to be untraced (no VM heap pointers inside), each VM’s GC can remain independent and never stop other VMs. If you allow GC’d objects to be shared across VMs, you either need a global safepoint (stop‑the‑world across participants) or a more complex concurrent/barriered scheme.
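
To make the "untraced" idea concrete, here is a rough C sketch of what such a region could look like. It is only an illustration of the point above, not Fun's actual code; the struct name and layout are made up:

    #include <stdatomic.h>
    #include <stddef.h>

    /* Illustrative only, not Fun's actual code: a shared region designed to be
     * untraced. It lives outside every VM heap and holds nothing but plain
     * bytes, so no per-VM GC ever has to scan it and no VM has to stop another
     * VM in order to collect. If GC-managed objects were shared across VMs
     * instead, every collector involved would need a common safepoint (or a
     * concurrent/barriered scheme) before moving or freeing anything. */
    typedef struct {
        atomic_size_t refcount;   /* lifetime is plain refcounting, not GC */
        size_t        len;        /* payload size in bytes */
        unsigned char data[];     /* raw bytes only, never a VM heap pointer */
    } untraced_shared_region;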

I added some more information about this to the internals document.


Thanks for the link. I will take a look at the internals. The trade-off between global locks and isolated states is usually the most critical decision for embeddable languages so I am curious to see how the implementation handles it.

I chose isolated state (like Lua) rather than a single global lock (like Python’s GIL). Each VM has its own heap, scheduler, and garbage collection. There are no cross-VM pointers. Concurrency and data exchange happen via message passing and a few carefully scoped shared-memory primitives for high‑throughput use cases. This keeps the C API simple, predictable, and safe to embed in multi‑threaded hosts.

Isolated state seems like the right call. I am curious how you implemented the shared memory primitives though. I spent a while trying to get zero-copy buffer sharing right in a previous project and usually ended up complicating the host API significantly to guarantee safety. Are you using reference counting or some kind of ownership transfer model there?

• We default to isolates for safety and scaling.

• Zero‑copy sharing is done with fun_shared_buffer, an off‑heap, GC‑untracked, pointer‑free block that’s immutable from the VM’s point of view.

• Lifetime is managed with plain reference counting (retain/release).

• For hot paths, we also support an adoption (ownership‑transfer) pattern during message passing so the sender can drop its ref without copying.
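
Roughly, the lifetime side could look like the following C sketch. This is only an illustration of the points above, not the real Fun C API: fun_buf_new/retain/release are invented names, and C11 atomics are used for brevity although Fun itself is written in C99.

    #include <stdatomic.h>
    #include <stdlib.h>
    #include <string.h>

    /* Sketch of an off-heap, pointer-free, refcounted buffer along the lines
     * described above; names and signatures are guesses, not Fun's actual API. */
    typedef struct {
        atomic_size_t refcount;
        size_t        len;
        unsigned char data[];     /* plain bytes, immutable once published */
    } fun_shared_buffer;

    fun_shared_buffer *fun_buf_new(const void *src, size_t len) {
        fun_shared_buffer *b = malloc(sizeof *b + len);
        if (!b) return NULL;
        atomic_init(&b->refcount, 1);   /* the creator holds the first reference */
        b->len = len;
        memcpy(b->data, src, len);
        return b;
    }

    void fun_buf_retain(fun_shared_buffer *b) {
        atomic_fetch_add_explicit(&b->refcount, 1, memory_order_relaxed);
    }

    void fun_buf_release(fun_shared_buffer *b) {
        if (atomic_fetch_sub_explicit(&b->refcount, 1, memory_order_release) == 1) {
            atomic_thread_fence(memory_order_acquire);
            free(b);
        }
    }

    /* Adoption / ownership transfer: instead of retain-on-send plus
     * release-on-receive, the sender enqueues the buffer on a port and simply
     * stops using it; the receiver inherits the sender's reference, so the hot
     * path has no refcount traffic and no copy. */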


Isolated state is definitely the right call. I am curious how you implemented the shared memory primitives though. Usually that is where the complexity creeps back in if you want to avoid global locks. How do you expose that without forcing the host to manage its own synchronization?

We don’t expose shared mutability to VMs. The trick is: publish‑as‑immutable plus adoption via ports. Ports/queues do the synchronization; fun_shared_buffer is off‑heap and refcounted with atomic ops. The host doesn’t need to lock anything for the common paths.
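
To make the "ports do the synchronization" part concrete, here is a minimal sketch of a bounded port built on a mutex and condition variable. Again, this is illustrative only, not Fun's internals; the types and names are made up:

    #include <pthread.h>
    #include <stddef.h>

    /* Sketch only, not Fun's internals: a tiny bounded port. The mutex/condvar
     * pair is the only locking; payloads are published as immutable, so neither
     * the host nor the VMs ever lock the data itself, only the queue. */
    #define PORT_CAP 16

    typedef struct {
        void           *slots[PORT_CAP];  /* e.g. pointers to shared buffers */
        size_t          head, tail, count;
        pthread_mutex_t mu;
        pthread_cond_t  not_empty, not_full;
    } port;

    void port_init(port *p) {
        p->head = p->tail = p->count = 0;
        pthread_mutex_init(&p->mu, NULL);
        pthread_cond_init(&p->not_empty, NULL);
        pthread_cond_init(&p->not_full, NULL);
    }

    /* Sender side of adoption: after this call the sender must not touch msg
     * again; its reference now belongs to whoever receives the message. */
    void port_send(port *p, void *msg) {
        pthread_mutex_lock(&p->mu);
        while (p->count == PORT_CAP)
            pthread_cond_wait(&p->not_full, &p->mu);
        p->slots[p->tail] = msg;
        p->tail = (p->tail + 1) % PORT_CAP;
        p->count++;
        pthread_cond_signal(&p->not_empty);
        pthread_mutex_unlock(&p->mu);
    }

    /* Receiver side: the returned reference is now owned by the caller, who
     * releases it when done. */
    void *port_recv(port *p) {
        pthread_mutex_lock(&p->mu);
        while (p->count == 0)
            pthread_cond_wait(&p->not_empty, &p->mu);
        void *msg = p->slots[p->head];
        p->head = (p->head + 1) % PORT_CAP;
        p->count--;
        pthread_cond_signal(&p->not_full);
        pthread_mutex_unlock(&p->mu);
        return msg;
    }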

Looks interesting... ;) Thank you!

You're welcome! Would love to get some feedback!

What they want to do is to implement the concept of a unikernel (https://en.wikipedia.org/wiki/Unikernel). That's a very different approach than running a complete OS with their software stack on top. Take a look at the two links in my other comment.


It sounded a little bit crazy to me at first, but thinking about it, it is a nice idea.

Maybe you should take a look at Rump Kernels and build your stuff on top. Then you do not need to implement the OS stuff; it's done already. Maybe I am wrong, but it seems to be a similar idea. The following project is currently at the OS level only, but some applications like nginx are working already. I was very confused when I first read about Rump Kernels, but after reading for a while and watching some conference talks, a lot of it made sense to me, even if I do not understand in detail what they are doing.

http://rumpkernel.org/

https://github.com/rumpkernel


This seems more like MirageOS/Ling than a rump kernel. From what I remember, rump kernels are more general, at the cost of being less "unikernel-y" (i.e. lean and fast). I agree that it would be easier and faster to use a rump kernel if the intention is to use it in production soon.

I can't tell if this is a research project, or intended to be production quality at some point.


>I can't tell if this is a research project, or intended to be production quality at some point.

Found this on their website [1]:

>IncludeOS is the result of a research project at Oslo and Akershus University College of Applied Science (hioa.no)

>IncludeOS is not production ready - but we're working hard to become so.

[1]: http://www.includeos.org/


Thanks, I didn't catch that. I'll have to make some time to play with this, especially once it gets further along.


I believe getting this production ready will take some years.

From what I understand from the project's FAQ page, they want to implement the concept of a unikernel, and this is what the Rump Kernel at http://rumpkernel.org/ intends too. Maybe I am wrong, because I am not too deep into it.

If it is a research project then they should keep going... It looks like it is, because it is being developed at a university in Oslo, Norway. http://www.hioa.no/eng/


No, a rump kernel is not a unikernel; check the FAQ linked from http://rumpkernel.org/

However, you can use rump kernels as a major component of a unikernel implementation. A rump kernel provides environment-agnostic drivers, meaning you can integrate them pretty much anywhere.

Now, what is a unikernel? From my perspective it's essentially: 1) application 2) config/orchestration 3) drivers 4) nibbly "OS" bits

So from the bottom, the nibbly bits include things such as bootstrap, interrupts, thread scheduler, etc. It's quite straightforward code, and a lot simpler than the counterpart you'd find e.g. in Linux. But you can't do much of anything useful with the OS when only that part is written.

Drivers are difficult because you need so many of them for the OS to be able to do much of anything useful, and some drivers require incredible amounts of effort to make them real-world bug compatible. Just consider a TCP/IP stack -- you can write one from scratch in a weekend, but the result won't work on the internet for years. Then you may need to pile on a firewall, IPv6, IPsec, .... A rump kernel will provide componentized drivers for free. The policy of whether you use those drivers in a unikernel or microkernel or whateverkernel is up to you, but I guess here we can assume unikernels.

The config/orchestration bits are actually quite an interesting topic currently, IMHO, with a lot of opportunities to make great discoveries. Also, a lot of opportunities to use the rope in the wrong way.

The applications depend on what sort of interfaces your unikernel offers. If it offers a POSIX'y interface, you can run existing applications, otherwise you need to develop them for the unikernel.

Now putting rump kernels and unikernels together: the nibbly bits are straightforward, the drivers come for free via rump kernels, and those drivers provide POSIX syscall handlers, so POSIX'y applications just work. That leaves the config/orchestration stuff on the table. There's a rumpkernel-based unikernel called Rumprun available from repo.rumpkernel.org. It's essentially about solving the config/orchestration problems. Due to the rump kernel route, the other problems were already solved in a way which can be considered "good enough" for our purposes.

Hope that clarified the difference between rump kernels and unikernels.

(edit: minor formatting fix)


That clarified the difference! Thank you.

I think I mixed up rump kernels and Rumprun. I saw a talk a long time ago and am not very deep into it nowadays.

Therefore, double thanks!


There is actually an active unikernel project called IncludeOS (http://www.includeos.org/), which could be a good place to start.


This is exactly what the original post is about... ;)


Maybe it's a bot designed to promote IncludeOS. Gives a stock response to anything on the Internet that has IncludeOS in its name. That would make more sense.


Urgh, completely thought that the name was clobbered by a second project. Silly me.


Uh, he is not a bot, he is not a bot, he is not a bot... :D


When running 'look' to get all comments in a file, I only get the first matching line in the output. Can someone explain this?

look "#" .zshrc

There are really a lot more comments (lines starting with '#') in my .zshrc than just the first line.


From the manpage:

> As look performs a binary search, the lines in file must be sorted...

I assume you didn't sort your source file first.


When sorting the file first I get no output anymore:

    sort ~/.zshrc -o /tmp/.zshrc

    look "#" /tmp/.zshrc

Sorry for the noise but I am confused... :|


sort(1) might not use the same ordering as look(1).

Try setting and exporting LC_COLLATE=C, then retry.
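
Concretely, something like this should do it (assuming a POSIX shell; untested):

    export LC_COLLATE=C
    sort ~/.zshrc -o /tmp/.zshrc
    look "#" /tmp/.zshrc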


Uh, wow! That was it!

Thanks for this answer...


It doesn't make sense to sort a configuration file; order and context matter. It's easier to just use grep '^#' (or even better, ag).


Yes, I am aware of that, but I wanted to understand the usage of 'look' at this point. It was a mistake to try it out on my .zshrc file at first, because I ended up misunderstanding everything. It was just the first file that came to mind. Since sorting a source file and then "looking" for comments makes no sense in any way, grep really seems to be the better choice.


Your .zshrc is almost certainly not a sorted file.

