andreamonaco's comments | Hacker News

I fully agree; in fact I'm writing a game in C too: https://github.com/andreamonaco/zombieland. Plus I have at least one more game project in mind in the same language.

Hello, I'm writing an implementation of the Common Lisp language that uses an enhanced reference-counting algorithm (taken from the literature) which detects and handles cycles. Performance seems okay, though I still haven't tried large programs.

https://savannah.nongnu.org/p/alisp


A somewhat different approach was recently proposed here: https://news.ycombinator.com/item?id=44319427 but it seems to have non-trivial overhead. (Still very much worthwhile, given the potential advantages of deterministic cycle collection.) The paper you reference is quite a bit older, so it would of course be interesting to do a proper comparison.


The talk for this paper came up on YouTube just the other day: https://www.youtube.com/watch?v=GwXjydSQjD8


I'll look at that. About performance: people in practice have always favored GC, so I think there's a lot left to discover in the optimization of reference counting algorithms, including concurrent traversal (which is easier because each node has local info in the form of refcounts and flags) and maybe detection of problematic worst-case graphs.


Naive ref counting (RC) and tracing GC are very different, but they start looking more and more similar the more you optimize them. Adding cycle collection to RC means adding some tracing. Adding deferred/batched destruction to RC is similar to making a tracing GC incremental. Saturated ref counts (or otherwise avoiding updates) are similar to creating an older generation in a tracing GC. Barriers in a tracing GC (for incremental/generational/concurrent collection) are similar to the ref count updates when mutating RC objects. RC cycle collection time is heavily determined by how much of the graph is traced through from "suspected" roots, so it can be optimized by tracing known-live stuff and removing it from consideration.

But some significant performance-relevant differences remain. RC's cycle collection tends to take time proportional to the amount of dead stuff. Tracing GC tends to take time proportional to the amount of live stuff. (Both use optimizations that weaken the connection, but they still show their origins.)
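
For anyone wondering what that tracing from "suspected" roots looks like concretely, here is a toy C sketch of trial-deletion cycle collection in the spirit of Bacon and Rajan's synchronous collector; all names, the naive recursion and the fixed-size buffer are illustrative simplifications, not any particular implementation:

    /* Toy sketch of trial-deletion cycle collection.  Error handling,
       a root buffer and iteration instead of recursion are omitted. */
    #include <stdlib.h>

    enum color { BLACK, GRAY, WHITE };

    struct obj {
        int rc;                  /* reference count */
        enum color color;        /* BLACK when quiescent */
        int nkids;
        struct obj **kids;       /* outgoing references */
    };

    /* Phase 1: trial deletion.  Remove the counts due to internal
       edges; whatever stays > 0 is referenced from outside. */
    static void mark_gray(struct obj *o)
    {
        if (o->color == GRAY)
            return;
        o->color = GRAY;
        for (int i = 0; i < o->nkids; i++) {
            o->kids[i]->rc--;
            mark_gray(o->kids[i]);
        }
    }

    /* o turned out to be externally reachable: restore it and the
       counts of everything it points to. */
    static void scan_black(struct obj *o)
    {
        o->color = BLACK;
        for (int i = 0; i < o->nkids; i++) {
            o->kids[i]->rc++;
            if (o->kids[i]->color != BLACK)
                scan_black(o->kids[i]);
        }
    }

    /* Phase 2: gray nodes with rc > 0 are live; the rest are
       tentatively garbage (white). */
    static void scan(struct obj *o)
    {
        if (o->color != GRAY)
            return;
        if (o->rc > 0) {
            scan_black(o);
        } else {
            o->color = WHITE;
            for (int i = 0; i < o->nkids; i++)
                scan(o->kids[i]);
        }
    }

    /* Phase 3: everything still white is an unreachable cycle.
       Frees are deferred so the traversal never touches freed memory. */
    static struct obj *dead[1024];   /* fixed size for brevity */
    static int ndead;

    static void collect_white(struct obj *o)
    {
        if (o->color != WHITE)
            return;
        o->color = BLACK;            /* avoid revisiting */
        for (int i = 0; i < o->nkids; i++)
            collect_white(o->kids[i]);
        dead[ndead++] = o;
    }

    void collect_cycles(struct obj *suspected_root)
    {
        mark_gray(suspected_root);
        scan(suspected_root);
        ndead = 0;
        collect_white(suspected_root);
        for (int i = 0; i < ndead; i++) {
            free(dead[i]->kids);
            free(dead[i]);
        }
    }

The cost profile described above falls out directly: every phase walks only the subgraph hanging off the suspected roots, so the work is proportional to the (mostly dead) region under examination rather than to the whole live heap.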


Many people still use C; I do, for sure.


Are you using it for embedded systems or regular software?


Regular software: a lisp implementation (https://savannah.nongnu.org/p/alisp) and a little online game (https://github.com/andreamonaco/zombieland)


Wow bro, that's cool! I especially liked the game. I'm currently developing a utility for viewing photos right in the terminal.

https://github.com/Ferki-git-creator/phono-in-terminal-image...

It's not ready yet, but can you please support it with a star like I did for you?


Kudos to you! I'm writing a game (a little MMO) the same way; the technical and creative freedom is priceless.


I'd say the latter, since I felt the need for debugging tools similar to those of other languages like C. Watchpoints, for example, are entirely non-existent in current implementations, as far as I know.
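
To give an idea of what's involved, here's a minimal sketch of one way a runtime could offer data watchpoints on Linux: write-protect the page holding the watched cell and catch the resulting fault. Purely illustrative; no existing CL implementation is claimed to work this way, and error checks are omitted.

    /* Toy software watchpoint: report the first write to a cell. */
    #include <signal.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static long *watched;
    static long pagesize;

    static void on_write(int sig, siginfo_t *si, void *ctx)
    {
        (void)sig; (void)ctx;
        /* fprintf is not async-signal-safe; fine for a demo only */
        fprintf(stderr, "write to watched page at %p\n", si->si_addr);
        /* unprotect so the faulting write can complete; a real
           debugger would single-step and then re-arm the protection */
        mprotect((void *)((uintptr_t)si->si_addr
                          & ~(uintptr_t)(pagesize - 1)),
                 pagesize, PROT_READ | PROT_WRITE);
    }

    int main(void)
    {
        pagesize = sysconf(_SC_PAGESIZE);
        watched = mmap(NULL, pagesize, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        struct sigaction sa = {0};
        sa.sa_flags = SA_SIGINFO;
        sa.sa_sigaction = on_write;
        sigaction(SIGSEGV, &sa, NULL);

        mprotect(watched, pagesize, PROT_READ);  /* arm the watchpoint */
        *watched = 42;                           /* handler fires here */
        printf("watched = %ld\n", *watched);
        return 0;
    }

The other classic route is hardware watchpoints via the CPU's debug registers (roughly how gdb implements them for C), which avoids the page-granularity false positives of the mprotect trick.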


Also, my Patreon page (https://www.patreon.com/andreamonaco) has behind-the-scenes posts, some even in the free tier


Yeah, the goal is bytecode compilation first and then a full compiler.


I don't like sites with heavy JavaScript, especially if it's non-free. (Though recently I started using GitHub for a different project.)

Savannah is very basic, perhaps too much, but it's okay for my project.


I'm not sure I understand. What I do in my project is very common practice: generated files (like the configure script) are not part of the repository, but they are put in released tarballs


It's a bad practice commonly found in GNU projects, which results in an overcomplicated, inconvenient and unstable build system that will discourage future collaboration. Many of these projects are very old, two decades or more; they are living with ancient decisions made in a world of dozens of incompatible Unix forks.

One thing to do instead is to just write a ./configure script which detects what you need. In other words, be compatible at the invocation level. Make sure this is checked into the repo. Anyone checking out any commit runs that, and that's it.
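
For the curious, the detection in such a hand-written script usually boils down to compiling tiny C probe programs and checking the compiler's exit status. A minimal sketch of one probe (the file name, the HAVE_STRDUP define and the choice of strdup are just arbitrary examples):

    /* conftest.c: if this compiles and links, the script emits
       "#define HAVE_STRDUP 1" into config.h; otherwise it doesn't. */
    #include <string.h>

    int main(void)
    {
        char *p = strdup("hello");
        return p == NULL;
    }

The script just runs "$CC conftest.c", checks the exit status, appends or omits the define, and removes the probe; there is no generated configure and nothing to regenerate.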

Someone who makes a tarball using git, out of a tagged release commit, should have a "release tarball".

A recent HN submission shows a ./configure system made from scratch using makefiles, which parallelizes the tests. That could be a good starting point for a C on Linux project today.


> A recent HN submission shows a ./configure system made from scratch using makefiles, which parallelizes the tests. That could be a good starting point for a C on Linux project today.

Not everything is C, or GNU/Linux. The example also misses much of the basic functionality that makes GNU autotools amazing.

The major benefit of GNU autotools is that it works well, especially for new platforms and cross compilation. If all you care about is your own system, a simple Makefile will do just fine. And with GNU autotools you can also choose to use just GNU autoconf, or just GNU automake.

Having generated files in the release tarball is good practice: why should users have to install a bunch of extra tools just to get a PDF of the manual or other non-system-specific files? It is not just build scripts either; installing TeX Live just to get the PDF manual of something is super annoying.

Writing your own ./configure that works remotely like what users would expect is non-trivial and complicated; we did that 30 years ago, before GNU autoconf. There is a reason why we stopped doing that ...

I'd go so far as to say that GNU autotools is the most sensible build system out there...


> Not everything is C,

AutoTools are squarely oriented toward C, though.

If you're not using C or C++, you're probably not using AutoTools.

(I think I might have seen someone's purely Python or shell script project using Autoconf, but that was just ridiculously unnecessary.)

> Having generated files in the release tarball is a good practise

Without a doubt, it is a good idea to ship certain generated files in a source code distribution. For instance, if we ship a generated y.tab.c file, the user doesn't have to have a Yacc program (let alone the exact version of the one we would like them to use).

What's not good practice is having anything in the release tarball differ from the git commit it was made from.

"Release tarball" itself a configuration management antipattern. We are a good two decades past tarballs now. A tarball should only be a convenience for people who don't need a git clone.

Every generated thing that a downstream user might need should be in version control, and updated whenever its prerequisites change. This is a special exception to the general rule that only primary objects should be in version control. Secondary objects for which downstream users don't have generation tools should also be in version control.


This is true and standard for (really) old projects, and dealing with these scripts and their problems used to be the bane of my existence 10 years ago. But I can't say I've encountered any such projects in the last 5 or so years.

Either they use a modern programming language (which typically has an included build system, like Rust's cargo or simply go build) or they use simple Makefiles. For C/C++ codebases, it seems like CMake has become the dominant build system.

All of these are typically better than what GNU autoconf offers, with modern features and equal or better flexibility to deal with differences between operating systems, distributions, and/or optional or alternative libraries.

I don't really see why anyone would pick autoconf for a modern project.


Cmake is by a wide margin the worst build tool I've used. That covers at least autoconf, gmake, nmake, scons, waf, tup, Visual Studio, the Boost thing, bash scripts and Lua scripts. Even the hand-edited XML insanity of Visual Studio caused negligible grief compared to cmake.


I strongly concur. Cmake is incompetently designed and implemented. The authors had no idea how to make a build language, but didn't let it stop them.


Having used both autoconf and cmake, I have a strong preference for autoconf (plus hand-written makefiles; I've never been able to get into automake). It's just easier for me to use, especially when it comes to writing tests for supported functions and adding optional features you want to enable or disable via configure script options.


In my opinion, automake is the weakest part of the autotools chain. Look at this section of the manual for example https://www.gnu.org/software/automake/manual/automake.html#G...: it says that automake doesn't recognize many GNU Make extensions, and can get confused even by weird whitespace...


CMake is really more of a C++ crowd thing, it never won the mindshare with C.

> I don't really see why anyone would pick autoconf for a modern project.

If you build for your system only and never ever plan to cross-compile, by all means go with a static makefile.


A good way to make sure your project won't cross compile is to use Autoconf. Rampant use of Autoconf is the main reason distros gave up on cross compiling and started using QEMU. Developers who use Autoconf and who don't know what cross-compiling is will not end up with a cleanly cross-compiling project that downstream packagers don't have to patch into submission.

Most of my disdain for Autoconf was formed when I worked at a company where I developed an embedded Linux distro from scratch. I cross-compiled everything. Most of the crap I had to fight with was Autoconf projects. I was having to do things like export various ac_cv_... internal variables that nobody should know about, and patch configure scripts themselves. Fast forward a few years and I see QEMU everywhere for "cross" builds.

The rest of my disdain comes from having worked with the internals of various GNU programs. To bootstrap their build systems from a repository checkout (not a release tarball) you have to follow their specific instructions. Of course you must have the Autotools installed. But there are multiple versions, and they generate different code. For each program you have to have the right version that it wants. If you have to do a git bisect, older commits may need an older version of the Autotools. You bootstrap the configure system from scratch, and the reward is the privilege of then running configure from scratch. It's simply insane.

You learn tricks like touching certain files in a certain order to prevent a reconfigure that has about a 50% chance of working.

Let's not even get into libtool.

The main idea behind Autoconf is political. Autoconf based programs are deliberately intended to hinder those who are able to build a program on a non-GNU system and then want to make contributions while just staying on that system, not getting a whole GNU environment.

What I want is something different. I want a user to be able to use any platform where the program works to obtain a checkout of exactly what is in git, and be able to make a patch to the configuration stuff, test it and send it upstream without installing anything that is not required for just building the program for use.


Autoconf and automake have the best support for cross-compiling there is; everything else is a poor imitation. At least from the perspective of the folks doing Debian's cross-build stuff. With Debian's multi-arch policy, cross-toolchain packages and dpkg-dev/debhelper support for driving common cross-compiling options, plus fixing a ton of edge cases, IIRC more than 50% of Debian packages are now cross-compilable without qemu. Often they are bit-for-bit identical to the native compilation too.

https://wiki.debian.org/CrossCompiling https://crossqa.debian.net/


A build system has great support for cross compilation when downstream package maintainers don't have to lift a finger to make it work, even though the upstream developer has not even tried cross compiling.


It's inevitable that upstream devs will use even the best build system incorrectly, resulting in downstream needing to make changes to fix things. This is the reality of what happens with both autotools and every other build system that supports cross compiling.


> A good way to make sure your project won't cross compile is to use Autoconf.

Yeah, well, this is not quite true. Most embedded distros leverage autotools heavily. In Yocto you just specify autotools as the package class for the recipe, and in most cases it will pull, cross-compile and package the piece of software for you with no intervention.

The tools are clearly antiquated, written in questionable taste, and 80% of the cases they solve are no longer relevant. They are still very useful for the rest.


Yocto builds entire packages twice, for build machine and target. It patches packages, doing whatever it takes. It makes use of QEMU also.

Sure, Autotools can do this and that ... given a significantly large and busy crew of downstream packagers who compensate for this and that in their distros.

It's a lame horse that needs to be buried.

The main thing that's wrong with Autotools perhaps is that it is shielded from fixes. Autotools makes a cockery of a project's configuration, and then downstreams concentrate on fixing the cockery to get that project working right. The fixes do not go back to Autotools!

There is already very little feedback between distros and upstream projects. Most of the time distros fix things silently, get things working and never contact upstream. (I've often learned of build issues in my projects by browsing downstream issues and discussions. They do not contact you! I would patch their problem. Then check a month later after a release and yup, "patch no longer applies; upstream fixed this; issue closed").

And now Autotools is one more hop behind upstream! Most people causing problems with Autotools will never be contacted, and those that are will never fix/improve or at least report anything in Autotools. If they do, they will probably just be told they are using Autotools wrong, go away.

Thus the Autotools project is blissfully oblivious to the idea that it might be a problem. Like an elephant in the room that is lying on your sofa, sipping a margarita and watching TV.


> Yocto builds entire packages twice, for build machine and target.

Are you sure? I was certain that if the package isn't necessary for the build process it is built for the target only. Some of the packages on our products are in fact impossible to build for the host, which is a pretty good hint.


I didn't mean that it builds all packages twice! Clearly, it does not.


> A good way to make sure your project won't cross compile is to use Autoconf. Rampant use of Autoconf is the main reason distros gave up on cross compiling and started using QEMU. Developers who use Autoconf and who don't know what cross-compiling is will not end up with a cleanly cross-compiling project that downstream packagers don't have to patch into submission.

Cross compilation for distributions is a mess, but it is because of a wide proliferation of build systems, not because of the GNU autotools -- which have probably the most sane way of doing cross compilation out there. E.g., distributions have to figure out why ./configure is not supporting --host because someone decided on writing their own thing ...

> The main idea behind Autoconf is political. Autoconf based programs are deliberately intended to hinder those who are able to build a program on a non-GNU system and then want to make contributions while just staying on that system, not getting a whole GNU environment.

Nothing could be further from the truth: GNU autoconf started as a bunch of shared ./configure scripts so that programs COULD build on non-GNU systems. It is also why GNU autoconf and GNU automake go to such lengths in supporting cross compilation, to the point where you can build a compiler that targets one system, runs on another, and was built on a third (a Canadian cross compile).


> distribution have to figure out why ./configure is not supporting --host cause someone decided on writing their own thing ...

1. Most of the time when ./configure won't properly support --host, it isn't because someone wrote their own thing, but because it's Autoconf.

2. --host is only for compilers and compiler-like tools, or programs which build their own compiler or code generation tools for their own build purposes, which have to run on the build machine rather than the target (or in some cases both). Most programs don't need any indications related to cross compiling because their build systems only build for the target. If a program that needs to build for the build machine has its own conventions for specifying the build machine and target machine toolchains, and those conventions work, that is laudable. Deviating from the conventions dictated by a pile of crap that we should stop using isn't the same thing as "does not work".


A few of my personal projects cross compile via static makefile. Is there something wrong with that?


If you're not writing something which compiles its own tools to use during the rest of the build (or you're not using a compiled language at all), all you have to do is respect the CC, CFLAGS, LDFLAGS and LDLIBS variables coming from the distro. Then you will use the correct cross toolchain and libs.

If you need to compile programs that run on the build machine, you should have a ./configure script which allows a host CC and target CC to be specified, and use them accordingly. Even if you deviate a bit from what others are doing, if it is clearly documented and working, the downstream package maintainer can handle it.


Ideally I'd reach ANSI compliance, first with a bytecode compiler and then with a full one


Is there some important shortcoming of all the existing Common Lisp implementations that you would like to correct?


Awaiting answers. Seems stepping is one.

Btw, I stick to SBCL as I use vim, and so far the script here works for me. Might try this when I'm back doing Lisp.

https://susam.net/lisp-in-vim.html


Yeah, advanced debugging features like watchpoints are very important to me

