4coder editor is now fully open source (github.com/dion-systems)
177 points by leoncaet on May 31, 2022 | 99 comments



While I'm not a user of this editor, this seems like great news.

4coder receives significant attention via Casey Muratori's video streams like Handmade Hero, and opening it up is sure to produce some positive growth for the development/community side of the project.

The combination of popular development livestreams and FOSS tooling utilized in those livestreams seems like it could be an important part of making FOSS development more sustainable without involving big corporate sponsors.


> opening it up is sure to produce some positive growth for the development/community side of the project

This is not an "opening up" of anything but the source code, with a permissive license. You're free to fork and change anything you want, but nothing will ever be upstreamed, as the author open-sourced it because they are stopping maintenance of the project.

So this is more of a "closing down" with the gift of leaving us with the source code. And for that Allen Webster, I thank you!


If there's demand in the community for further development, then the project will be immediately forked and that fork will become the new upstream.


Indeed, the community could fork it, change the name and carry on development. But it would no longer be "4coder" but based on/forked from "4coder", as "4coder" would no longer receive any updates from this point on.


> the community could fork it, change the name and carry on development. But it would no longer be "4coder"

4kdcoder?


Or 8coder? And when that gets forked, 16coder....


ForkCoder? Or ForkEd...


With the wrong accent this sounds like telling a coder off…


Related:

4coder, a modern text editor based loosely on Emacs - https://news.ycombinator.com/item?id=23444912 - June 2020 (16 comments)


For anyone else (like me) who doesn't know what this is, it seems like the website is http://4coder.net


I wish the stock README.md template for every project started with:

"What the hell is ________?"


I find it a bit of a peculiarity of HN that GH links, no matter how non-descriptive, are so often preferred over projects' main home pages.


In this particular case, the github link makes a bit more sense, as the main topic of discussion was the fact that it's now open source.

I agree with you in general though.


Most probably because the GitHub content will outlive the project homepage, especially when you consider that 4coder is no longer maintained, and thus the source is being made available. Seems logical to me.


with smaller projects it's pretty common that HN hugs the site to death


From that page, "Anticipated Next Version: 4.1.3 (beta), February 2020", so this open-source release, along with the GitHub repo being set to archive/read-only, looks like an "I'm done. If a community wants to keep this alive, over to them…". Nothing wrong with that; it is much better than complete abandonment in such a way that other people can't carry the torch even if they really want to, but it is a less positive situation than the headline suggests.


I love (not) these posts that assume we know what it is.


I love people who think everything has to be aimed at their consumption.

Some things are aimed at a group of people, and written with that in mind. Someone closing a project/stopping maintenance of something usually writes directly to that software's group of users.

Expecting everything to contain information about everything someone might not know would mean everything has to be extremely verbose. We have search engines; typing "4coder" and finding the website took me around 2 seconds, and I'm sure you'd be able to do it just as fast.


I understand your perspective, but I disagree in this particular case. "Show HN" is obviously aimed at our consumption.



Hm. It was either removed, or I haven't seen the title properly.

Still, the argument is the same - a person is posting an article on a news site for the sole purpose of other people seeing (consuming) it.

I consider it basic decency to create a good presentation when you're showing something in people's news feed.


>Still, the argument is the same - a person is posting an article on a news site for the sole purpose of other people seeing (consuming) it.

>I consider it basic decency to create a good presentation when you're showing something in people's news feed.

The individual posting the article may not be the owner of the code base, thus making it impossible for the poster to alter the presentation.

If the owner of the code base has not posted a link for others' "consumption," they would have conformed to your expectations.


> The individual posting the article may not be the owner of the code base

Good point, I didn't consider that.

The point of view I'm coming from is that while it's impossible to put all possible information in a README (and such level of detail would probably make it even less readable, like Whitehead and Russell's Principia Mathematica) - there are some things that are reasonable to expect on any kind of public repository (I assume it's a public repository because of the README's public-facing style - "Welcome to the 4coder code base. [...] In this readme you will find: [...]"), such as "what is the purpose of this program".

I know nobody owes anything to anybody, and such criticism might be seen as rude - since we're basically criticizing volunteers - but consider the fact that feedback is a major part of how we figure out our mistakes. If someone is posting their work online, it would be reasonable to assume that they care about their work and want to make it better. Without (constructive) criticism, they would never receive the feedback required for improving. Praise is pleasant and should be given when it's due, but criticism is also useful information.


Well, anyone can post a link to HN but only the project author can adjust the project’s README. Perhaps a better URL exists, but also perhaps not everything on HN has to be for everyone on HN. This is obviously news for people who know what 4Coder is, not necessarily meant to onboard people who don’t want to look up what 4Coder is.

Edit: grammar


> I love people who think everything has to be aimed at their consumption.

I love how people don't bother providing information that would be easy to provide. The URL to the website would be handy, as it isn't even included in the README on github. Then people get all het up when others mention the lack of information.


I love people who think that they have any semblance of an idea of who will see or read their stuff. It's so pre-Gutenberg of them. ;)

You don't have to write your things FOR EVERYONE but it isn't unreasonable to suggest that folks are going to link to random projects on github and that other folks may end up on said project through unexpected means, and with no context, or with unexpected context. It probably follows that _maybe_ spending the time to write _one sentence_ to describe WTF they're looking at isn't unreasonable if you want ANYONE to consume it.


They will tell stories of you


Ah this is great! Sad that it's not getting any future updates, but on to other projects. I tried using the Mac version of 4ed a couple times, but it never got the TLC that Windows did, plus the emacs-like setup was foreign since I use vim.

The author has a great YouTube playlist[0] as well, bringing you through some basics of their preferred methodology of programming. It's been interesting to follow along and check out how he does things.

[0] https://www.youtube.com/c/Mr4thProgramming/videos


Seems to be an emacs clone using C++ for customization instead of elisp.


From my very brief experience with it, and watching Casey hack on it, it's a _much_ better-designed program. I've used emacs quite extensively, and while I don't tinker with it that much, every time I do it feels like a maze with a gazillion dead ends. It's a terrible experience most of the time, and an unremarkable one the rest.

Also, I feel like the C configuration opens the door to using other languages too; Lua comes to mind immediately. You can do that in emacs too, but it's not as straightforward.
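For what it's worth, here's a minimal sketch of what that could look like, assuming a hypothetical config.lua read at startup; the lua_*/luaL_* calls are the real Lua C API, everything editor-specific is made up:

    /* Hypothetical sketch: a C editor core reading its configuration from Lua.
     * Only the lua_*/luaL_* calls are real API; config.lua and the settings
     * are invented for the example. Link with -llua -lm. */
    #include <stdio.h>
    #include <lua.h>
    #include <lualib.h>
    #include <lauxlib.h>

    int main(void) {
        lua_State *L = luaL_newstate();
        luaL_openlibs(L);

        /* config.lua might contain:  tab_width = 4 */
        if (luaL_dofile(L, "config.lua") != LUA_OK) {
            fprintf(stderr, "config error: %s\n", lua_tostring(L, -1));
            lua_close(L);
            return 1;
        }

        lua_getglobal(L, "tab_width");
        int tab_width = lua_isnumber(L, -1) ? (int)lua_tointeger(L, -1) : 8;
        lua_pop(L, 1);

        printf("tab_width = %d\n", tab_width);
        lua_close(L);
        return 0;
    }
Exposing editor commands back to Lua would be the same pattern in reverse (lua_pushcfunction + lua_setglobal).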

One last thing. One of my biggest gripes with emacs is the performance (I'm using a rather low-end computer). I'm making the assumption that a moderately-sized pile of C++ will perform better than a huge pile of elisp.


I'm a huge advocate for making everything open source so good for 4coder!

I feel like a couple of posts on HN the past few months have been "X is going open source". It makes me question why projects that aren't open source when they initially release switch over to open source later. I understand keeping a repo private before having a working/stable product, but why release a product as closed source and then open it up later? Why not just keep the product off the market until it is ready to be released and make it open source before or at launch time?

I totally accept that maybe I'm just crazy, but as someone who tries to keep my stack as open source as possible I find myself not really interested in products that start closed source and eventually open up. Is it just me?

Just to reiterate: I'd rather a project go open source later vs never.

EDIT: I just realized the context for 4Coder was that it was a "we're ending this product so here's the source for anyone who wants it" -- I am super grateful for closed-source products that do this. But my question is still relevant for other projects that have done what I described above.


I would just prefer an open source alternative to vscode (that wasn't vscodium)

vscode to me is simply too bloated. They should have landed like a year and a half ago.


An open source text editor that isn’t as “bloated” as VSCode? Sounds like … all of them, really, try Emacs or vim or nano or vi and maybe you’ll find something you like. But if what you want is “VSCode with all the features I use and none of the ones I don’t,” you’ll probably have to make it yourself (or at least be more specific).


Have you looked into lite (https://github.com/rxi/lite)? It's quite minimal, but easily extensible and is very lightweight.


I remember Casey Muratori was explaining his emacs key bindings in one of his handmade hero videos and he said something like 'I would use any editor that has these key bindings and split windows', so I guess that is when the 4coder idea came to life.


I purchased it a few years back, and I was having fun, but a bug on the newest version of Windows derailed me from continuing with it. I was looking for a lean, C or C++-based coder to replace my Vim/Emacs habit! I'll have to check it out again.


> that there is a totally separate build system for the custom layer which is also a big gigantic mess of its own. It involves several stages of compilation, and a number of metaprograms.

Sounds like a handful to manage closed source solo!


they are not on github, but there are builds available from itch.io as free downloads.

https://4coder.itch.io/4coder


Anyone know why development is being discontinued?


Just a stab in the dark, but he (Allen) and Ryan Fleury are working on a much bigger project atm (https://dion.systems/index.html) which includes an editor too, so I'm guessing they just don't have enough time


Happy to see another code editor go open source! Kudos to the 4coder team.


How well does 4coder support users who like using the mouse?


You can use On Screen Keyboard with the mouse.


If you select an arbitrary extent of text with the mouse, is that extent highlighted?

Are there then commands that operate on the highlighted extent?

If you double-click a word, is that word highlighted?


Is there any reason to use this over, say, Sublime Text or VS Code?


I remember an Aussie game developer on YouTube uses this editor. I hadn't heard of it before, but it seemed interesting. I eventually settled on learning Emacs instead, because 4coder was closed-source at the time. No regrets. Now I'm probably with Emacs for life. Too unique. Too powerful. Too customizable.

Side note: it's funny how formerly closed-source projects always open up to have tons of spaghetti code and bad design planning.


> Side note: it's funny how formerly closed-source projects always open up to have tons of spaghetti code and bad design planning.

I was curious after you said this, so I took a look at the 4coder source. It looks like very straightforward code. The files are clearly marked so you know exactly what they do, the APIs look simple and reasonable, and they have clearly marked folders containing all platform specific code. I'm not sure why you put this side note here, but it doesn't seem to hold water in this case. The code is definitely not spaghetti, and it looks like it was designed with quite a bit of forethought.

Edit: I would just like to add that I saw the author left several comments about the state of the code in the Readme that imply the code is very unstable. I think developers tend to be very critical of their own code, but when I took a cursory glance at the code from an outside perspective it seems much easier to follow than many other projects I've tried to take a look at. For example, for a quasi-open source project, ffmpeg is extraordinarily complex and hard to follow. So I definitely don't think OSS = super well designed code haha


I'll admit, I didn't actually read the codebase. I just took the notes at face value. I agree. People are harsh in review of themselves.

To your second point: I think it's a matter of size for a project becoming spaghettified. ffmpeg is mature and massive, so it's no surprise it's jumbled at this point. For a project of 4coder's scale, I'd wager that being open source sooner would've benefitted its design.

But, it's in the past. I'm just glad the dev left the code in the community's hands before dropping it. A lot of devs don't even do that, and the project becomes history.


> I would just like to add that I saw the author left several comments about the state of the code in the Readme that imply the code is very unstable.

I dunno, I'm looking at their "4coder_base_types.h" header [0] and redefining basic types and builtin keywords for personal preference (yeah I know, static is overloaded) is a serious eyebrow raiser.

[0] https://github.com/Dion-Systems/4coder/blob/62fe17699a99f7d2...
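For readers who haven't opened the header: the pattern under discussion generally looks something like this (a paraphrase of the common "handmade C" idiom, not a copy of the actual 4coder file):

    /* Paraphrase of the idiom being discussed, not the actual 4coder header:
     * short fixed-width aliases plus keyword renames. */
    #include <stdint.h>

    typedef int8_t   i8;
    typedef int16_t  i16;
    typedef int32_t  i32;
    typedef int64_t  i64;
    typedef uint32_t u32;
    typedef float    f32;

    #define internal        static   /* "this function is file-local"        */
    #define global_variable static   /* "this static is a global"            */
    #define local_persist   static   /* "this static lives inside a function" */

    global_variable i32 frame_count;

    internal f32 lerp(f32 a, f32 b, f32 t) { return a + (b - a) * t; }

    int main(void) {
        frame_count += 1;
        return (int)lerp(0.0f, 10.0f, 0.5f);
    }
The greppability argument for it comes up downthread; the objection here is that it's effectively a private dialect layered on top of the language.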


The types bit is incredibly common; I think pretty much all codebases I've worked on professionally did that.

The keywords bit isn't that common, though it could be related to some custom script (e.g. having the editor parse the "function" keyword) or doc generator.


You can just hover over the keyword with an IDE and instantly see that it's just a typedef for static. There's nothing wrong with doing something like this, especially since it wasn't meant to be OSS and more of a personal project until now. In addition, i32 is not an ambiguous typedef. It's way clearer that this is a 32 bit integer than `int` is.

Finally, big libraries do this as well. Freetype is notorious for typedefing weird stuff like FT_Pos and all that. It's just one way to model the domain language in the code, and that's fine by me. If the big open source libraries can do it, what's wrong with a small one doing it too?


> You can just hover over the keyword with an IDE and instantly see that it's just a typedef for static

Assuming you're browsing in an IDE, and that the IDE understands your build system. Github for example doesn't detect that it's a typedef for static. There's also the fact that the keyword `internal` is overloaded, and not in a standard way. For example, does it mean that it's not exported (a la dllexport in msvc)?

> In addition, i32 is not an ambiguous typedef. It's way clearer that this is a 32 bit integer than `int` is.

But that's not what the typedef is to; the typedef is to int32_t, which is a standard type that is already guaranteed to be 32 bits.

> If the big open source libraries can do it, what's wrong with a small one doing it too?

typedef'ing FT_Pos is totally different - FT_Pos is not redefining a basic type to be used all over the project, it's defining the type used to refer to coordinates. Freetype is also written in C, and is almost 30 years old. You'd also better hope that, when you're using these typedefs, any third-party libraries (e.g. FreeType) use the same aliases.


The internal define for static is actually really good. Maybe a different keyword like 'fn' would be better because it doesn't clash with windows.h, but redefining static to a different keyword allows you to search for it in a codebase, while plain static can be a local variable, a global variable, or a function.


I fundamentally disagree. The reason programming languages work is that things are consistent; if everyone defined their own flavor of keywords then you're not using the same language anymore. On any project I've worked on I would fail a code review for attempting to introduce something like this.


He is consistent: all functions have that keyword, and it's his codebase. He also has the api(custom) define so his metaprogram knows which functions to publish into the plugin system. C++ syntax is not very regular, so if you want to consistently catch all functions with a homemade parser or grep or whatever, it would need to be complex: it needs to do backtracking, account for stuff like operator+, etc. If you do what he is doing, it's trivial to make a metaprogram for C++ that consistently catches all functions.
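A toy example of that point, with all names made up: once every function definition starts with the same token, a dumb text scan (or plain grep) finds them all without needing a real C++ parser.

    /* Toy illustration (made-up names): a marker keyword in front of every
     * function definition so a simple metaprogram or grep can enumerate them. */
    #include <stdio.h>

    #define function static

    function int add(int a, int b) { return a + b; }
    function int mul(int a, int b) { return a * b; }

    int main(void) {
        /* grep -n "^function " file.c  ->  lists add and mul, nothing else.
         * Without the marker, catching every definition means handling
         * templates, operator overloads, trailing return types, macros... */
        printf("%d %d\n", add(2, 3), mul(2, 3));
        return 0;
    }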

It's great that you would fail a code review in your company for that, but it's not relevant.


The API custom define is perfectly fine. (Well it sucks but that's not his fault, that's a c++ language deficiency). But making custom defines for fundamental types and for language keywords because you'd rather type internal than static is just messy.


Well, I guess at this point the only thing I can say is: I disagree.


Regarding your side note, they're at least transparent about that. :)

> I DO NOT recommend learning from this codebase, especially not with an uncritical eye. It may be a useful reference for certain algorithms and architectural ideas, and it certainly contains some cautionary tails. But if you are a beginner, I encourage you to aim for more than emulating the style and structure of this codebase.


For sure. Perhaps I came off as way too critical. I write my fair share of spaghetti code. I think we all do. But at least in an open fashion, things are corrected far sooner with more eyes on it. That's all I'm saying.

The editor seems cool regardless.


said Aussie game developer on YouTube: https://www.youtube.com/c/RandallThomas


note to future opensourcers: please, please, explain what the project does in your README. People get RTFM so often, maybe it’s time for WTFM too…


yup. was reading the readme and going .... "yes, but WHAT IS IT?!?!"


I get that the whole handmade shtick is to avoid existing libraries to allow for better software, but when you see things like [0], you realise what cmake is for...

[0] https://github.com/Dion-Systems/4coder/tree/master/bin


Those scripts are incredibly simple, and super easy to write and maintain. It's a breath of fresh air compared to other build systems. But I won't defend the cpp, and neither does the author really.


But the scripts don't work without the cpp - you can't really excuse the build system without excusing the cpp file. All of the actual logic for the build system is in the cpp file, and at least on windows it fails at the first hurdle - it blindly assumes cl.exe is in your path which is not as common on windows as it is on linux for example.


It’s perfectly legitimate to say “run the script from a shell where the environment is set up”. In fact, it’s preferable, and you should only attempt to be “smart” and search for that type of thing if you really need it — why use any search logic at all, if it can be avoided. vcvars and its modern incarnations exist to facilitate this.

I had already skimmed the cpp (at least the `build` functions). It’s not necessary for what they’re doing. It could be easily replaced with a shell script. I know because I’ve done something very similar, supporting I think even more build variants and compilers. This is mostly copied from a real script, except I mocked up `build` somewhat, since the real one is slightly more complicated.

    build() (mkdir -p "$1" && cd "$1" && shift && "$@" ; )
    ...
    declare -a gcc=(-std=c89 -g -D_GNU_SOURCE -fpie -pie -fno-strict-aliasing -Wall -Wextra -Wformat=2 -Wstrict-aliasing=2 -Werror)
    declare -a clang=(-std=c89 -g -D_GNU_SOURCE -fpie -pie -fno-strict-aliasing -Weverything -Wno-unused-macros -Wno-used-but-marked-unused -Werror)
    declare -a debug=(-O1 -D_FORTIFY_SOURCE=2 -fno-omit-frame-pointer -fno-optimize-sibling-calls)
    declare -a asan=("-fsanitize=address,undefined,leak")
    declare -a msan=(-fsanitize=memory)
    declare -a isan=(-fsanitize=integer)
    ...
    build "$build/x86_64-linux-gnu-gcc-addrsan-debug" gcc "${gcc[@]}" "${debug[@]}" "${asan[@]}" -o example "$src/example.c"
    build "$build/x86_64-linux-gnu-clang-debug" clang "${clang[@]}" "${debug[@]}" -o example "$src/example.c"
    build "$build/x86_64-linux-gnu-clang-addrsan-debug" clang "${clang[@]}" "${debug[@]}" "${asan[@]}" -o example "$src/example.c"
    build "$build/x86_64-linux-gnu-clang-memsan-debug" clang "${clang[@]}" "${debug[@]}" "${msan[@]}" -o example "$src/example.c"
    build "$build/x86_64-linux-gnu-clang-intsan-debug" clang "${clang[@]}" "${debug[@]}" "${isan[@]}" -o example "$src/example.c"
    ...
Very easy to write a powershell version for Windows.

If you know the compiler flags you want, nothing beats literally just writing them all down in a script, when it comes to debugging issues.


> you should only attempt to be “smart” and search for that type of thing if you really need it — why use any search logic at all, if it can be avoided. vcvars and its modern incarnations exist to facilitate this.

But that's the standard for how to do things on Windows. That's how it works. Sure you might not like it, but personally I don't like the implicit assumption that everything is configured and "there" that your scripts and the OP's scripts assume.

Here's the alternative in cmake though:

    cmake_minimum_required(VERSION 3.10)
    project(hello_world)
    add_executable(app main.cpp)
All of a sudden you have support for all major platforms, the workflow is the same (meaning you don't have platform-dependent scripts like build_win and build_linux), it works out of the box with all major IDEs/editors, supports optimised and stripped builds easily, and abstracts that away across platforms (-O2 vs /O2, /Z7 instead of objcopy + strip). These are problems that have been solved for 30 years, and makeshift bash/batch scripts only make it harder for people to understand what's going on.


It’s fine if you want searching, but I disagree that it’s implicitly wrong not to do it because searching is supposedly standard and how everyone else is doing it. Never mind that there exists plenty of software (with far more specific requirements than “give me an object file/dll/executable, compiler version/ABI/options be damned”) that doesn’t use such an approach. For example, try to tell CMake how to create a freestanding executable (one that doesn’t link to the system C library). You end up fighting against the abstractions inherent in its design. Which is to say, depending on what you want to do, it might not be a good fit, and you may be better off just writing out your compiler options yourself.

Here’s part of the powershell version of a build script I have used, where I had fewer build variants, so I didn’t even bother to make a `build` function, I just copy paste the `out=... ; if { ... }` block and change the value of `$out` and which compiler and options are called. Note that you can accomplish this with far fewer variables if you don’t want quite as many build variants as this script supports — it was written as an illustration of how flexible you can make things, rather than how simple things can be, so it supports clang, clang-cl, cl, zig cc, with variants for asan, freestanding, for architectures x86, x64, aarch64:

    $ml = @("-nologo","-c","-Zd","-WX","-W3")
    ...

    $clstd = @("-volatile:iso","-permissive-")
    $clangcl = @("-nologo","-diagnostics:column","-Z7","-Zo","-FC") + $clstd
    $cl = $clangcl + @("-WL","-Gm-","-utf-8")
    # -Wall > -W4 but includes very silly things like "function not inlined" for snprintf
    $clwx = @("-WX","-W4","-wd4100","-wd4127")
    ...
    $clfree = @("-D","EXAMPLE_FREESTANDING","-Zl","-GS-","-sdl-","-Gs9999999")
    $clfreedebug = $clwx + $clodebug + $cldd + $clfree + $clnortti + $clhash + $cllink + $link + $linkfree
    ...
    $out = "./x86_64-windows-free-cl-debug"
    if ("all" -in $targets -or (split-path -path $out -leaf) -in $targets) {
        new-item -itemtype directory -force $out | out-null
        try {
            push-location $out
            $buildcount++
            write-host (split-path -path $out -leaf)
            ml64 @ml $src/example-x64.asm
            cl "-Fm./example.map" "-Tc$src/example.c" ./example-x64.obj @cl @clfreedebug
        } finally { pop-location ; }
    }
If you have an issue, you can easily change compiler options. It will always work and won’t suddenly break if you install a second set of compilers for another project, whereas CMake will have to choose which one and might pick wrong. Likewise, there is no potential for an upgrade to CMake to decide to use different compiler options that interfere with the freestanding stuff. You lose CMake’s abstractions, but like I said, that may be a good thing, depending on what your requirements are, assuming you have detailed knowledge of the compiler (which I’d rather accumulate than detailed knowledge of CMake, and I would know, I have both).


> For example, try to tell CMake how to create a freestanding executable (one that doesn’t link to the system C library). You end up fighting against the abstractions inherent in its design.

What's wrong with target_link_options(foo PRIVATE -static-libgcc -static-libstdc++)?

I don't understand how that particular powershell block is more flexible than cmake; from what I can see it does less (it's windows-specific - mine works on Linux, Mac, Windows, and also works with cross-compilation toolchains out of the box). If I want to add specific warnings I can use target_compile_options to get exactly the same behaviour.

> won’t suddenly break if you install a second set of compilers for another project,

I'm genuinely curious - have you used cmake? Cmake has sensible defaults but is completely customizable. Cmake uses platform standards - if you overwrite the CC variable it will use that toolchain instead. Cmake has its warts, for sure, but lack of flexibility is definitely not one of them.


I have extensively used CMake since years before it supported the target_ options, and years afterwards. So I was using it before, during, and after the breath of fresh air that was the shift to what is now called “modern CMake”, i.e., bjam/b2 style (define targets with private/public/interface dependencies/options). As you can guess, I’ve also used bjam/b2 extensively. I’ve also used meson to a more limited extent. And when I say “used”, I mean where I’ve written many build files from scratch myself, in a professional capacity.

I’m always the “build system expert” on my team because of how much experience I have. If working on a team where people know the compilers, and where build times are kept reasonable (so not using lots of templates in C++), then on a practical level I recommend not using a build system at all, and just writing a super simple shell script (unity builds are easier there too). I still recommend CMake sometimes though — especially if the main requirement is just “gimme some machine code, who cares how, and who cares about using ASAN, etc.”, and the second most important requirement is to ship the code to the open source community.

BTW, I mean freestanding as in “does not link to any C library”, rather than “statically links to the C library”.

   declare -a free=(-std=gnu99 -DEXAMPLE_FREESTANDING -nostartfiles -nostdlib -nodefaultlibs -ffreestanding -fno-stack-protector)


> then on a practical level I recommend not using a build system at all, and just writing a super simple shell script (unity builds are easier there too).

But the minute you introduce someone who is familiar with windows or prefers an IDE workflow this falls apart. You are also limited to the platforms, compilers and IDEs that you write the script to support. That's not flexible, that's inflexible. Also, cmake has supported unity builds out of the box for a few years now [0] and before that cotire existed.

> I mean freestanding as in “does not link to any C library”, rather than “statically links to the C library”.

Ok, great, so just use target_compile_options in the exact same way as you've just listed the arguments? I don't see how cmake's architectural decisions stop you from doing that at all.


Writing a new “call the compiler like so” line is pretty easy — pretty flexible, I’d say. Let’s say a new compiler came along tomorrow, where all the options have plus signs instead of minus signs (“+O +DPREPROCESSOR_DEFINITION”), I’d have an easier time adding a single line to a script than I would writing a toolchain file for CMake, or more likely, having to look at the CMake source code. That is a not unreasonable, but still quite pathological example. But it’s a window into the annoying world of manipulating compiler commands via the indirect mechanism of CMake. If you have very precise requirements, especially regarding order of arguments on the command line (a common issue with cl but to a lesser extent other compilers also), then using CMake can be like performing surgery with chopsticks. If you have simpler needs, it can be like eating sticky food with chopsticks — i.e., a convenient way to keep your hands clean. It seems like I just disagree with how steep the slippery slope is between those two points.

As for editors and whatnot, I’ve worked on teams where people used Visual Studio, vim, CLion, VS Code, via these scripts. Most with full IDE integration, some without (some people don’t use it). The easiest pathway is to generate compile_commands.json — ironically, this is already something extremely close to the script, so it’s trivial to automatically generate from the script.

I was just saying as a postscript that I meant something different by freestanding. I agree that it’s totally orthogonal to your argument and mine.


> If you have very precise requirements, especially regarding order of arguments on the command line

Then write those arguments in that order with target_compile_options?

> The easiest pathway is to generate compile_commands.json — ironically, this is already something extremely close to the script, so it’s trivial to automatically generate from the script

You keep saying it's trivial to extend it to do X and to do Y, and yet it's even more trivial with cmake - it already does it. At a certain point you've just poorly reimplemented something else

I've never had to add support for a brand new c++ compiler to cmake personally, so I'll have to trust that yes, it's easier to modify your shell script to support that. But given that 99% of my build-system work is generating IDE files and building with MSVC, GCC and clang (even across servers, mobile devices and consoles), I'll continue to recommend the system that works for that rather than a hypothetical new c++ compiler that will likely require more time spent on conforming the code than modifying a cmake file.


I’ve repeatedly wasted hours on issues intrinsic to CMake (not respecting options, overriding my options by passing contradictory ones in a way that’s beyond my control, etc., but also you guessed it, bugs in CMake, or issues with its awful and poorly thought out language). I’ve never had issues intrinsic to a script that just calls gcc or what have you. Any issues I’ve had with builds using scripts are to do with learning how a specific compiler works, something I have to do anyway, and therefore it’s never wasted. Sometimes but rarely enough, I even use CMake as a starting point (especially if I can’t access my previous scripts right at that moment), but I just extract the resulting commands, write the script, and delete the CMakeLists.txt.

I have ported scripts to use `zig cc` (though that uses a gcc/clang-like set of options by design, so easy in CMake also), and `tcc` (again gcc/clang-like, but only supporting a small subset of options, so with a script, I can just remove those arguments and the “port” is done — suppressing options CMake wants to give is harder).

As for poorly re-implementing something CMake already does, I disagree when CMake doesn't do all that much (CMake's language is not great at doing much beyond specifying basic target options). Writing compile_commands.json for a gcc toolchain is a one-liner callout to python, just `print(json.dumps({"directory": sys.argv[1], "arguments": sys.argv[2:], "file": sys.argv[-1]}))` given the same arguments as that `build` function from earlier. This tiny extra up-front cost reaps rewards for every ship cycle where you don't have to fight with the tool, or even learn it in the first place (think about issues like passing -D options to CMake, then reconfiguring the build in the same build dir but with different options: does this always work or do what you expect? In my experience, absolutely not). I won't complain about the general quality of typical CMakeLists.txt files because the general quality of scripts isn't great either! But it's nice to limit the number of poorly taught and therefore poorly written scripting languages involved in the build, and shell is far more generally applicable, so it's a worthwhile investment compared to CMake/meson/waf/b2/xmake/bazel/whatever else is coming down the pipe. (If you're a massive organization, we're talking collaboration across the globe in ways that exceed normal methods of human collaboration, and that result in monumental build times and complexity, then invest in a fancy tool like bazel, it'll probably be worth it for you).

I’m perfectly happy for people to continue to use CMake, so I’m not saying other opinions are wrong, and I wouldn’t immediately proselytize a team to switch away if I joined it — one of the reasons I don’t tend to use it is because focusing on build tools is a huge time sink, but it’s also a time sink to start a debate about transitioning to a script. But I’m happy not having to deal with it in many situations.


I wonder how people come to a decision that it's a good idea to start their own text editor when vim and emacs exist. For this one my guess is the author is a c/c++ developer and wanted to have their editor customizable in their desired language. Would that be enough?

It's also not a common thing for projects like that not to start as FOSS.


There's a sizeable cohort of people (that overlaps largely with the "handmade network") who are tired of hearing "don't reinvent the wheel" and see virtue in rebuilding software. I think it's a good attitude personally, since it's _how we get better things_.


    - Fun
    - Learning new things
    - No decades old quirks & constraints through backwards compatibility
    - Same concept with fresh ideas & multithreading
A lot of good reasons imho, which is why I'm also tinkering with my own little editor. In the worst case I'll have learned a ton and have a small collection of useful libraries. Some innovation in modal editing after 30 years might be nice.


> I wonder how people come to a decision that it's a good idea to start their own text editor when vim and emacs exist.

Vim/Emacs have a huge community and ecosystem, but they aren't exactly perfect. I don't see anything weird about people writing their own editor.

I'd personally love to have a Python-programmable editor a la Emacs, and if I ever write a text editor, it's gonna be like that.


Eh, it's part of the handmade effort. Sometimes building your own tool is just fun.

+ You don't need to learn a bespoke language to customize 4coder (vimscript in vim's case)


> I wonder how people come to a decision that it's a good idea to start their own text editor when vim and emacs exist.

It's really a smaller decision compared with the point of view that text files as source code are "holding back our tools" - https://dion.systems/dion_format.html

So these guys are really thinking big.


If text files are holding you back, Smalltalk exists...


What do you mean by that?


In Smalltalk there is no such thing as source files. Your program is an image which can be freely modified and dumped. Look at Pharo[1] which is a modern Smalltalk environment. You start it up and create classes in the IDE, but never do you create "source files".

[1] https://pharo.org/


"6.18 Saving code in a file" Pharo 9 by Example, page 75

https://books.google.com/books?id=hfpwEAAAQBAJ&lpg=PA75&ots=...

> In Smalltalk there is no such thing as source files...

    $ cat fact.st
    Stdio stdout 
        nextPutAll: 100 factorial printString; 
        nextPut: Character lf.!
    SmalltalkImage current snapshot: false andQuit: true!

    $ bin/pharo --headless Pharo10-SNAPSHOT-64bit-502addc.image fact.st
    93326215443944152681699238856266700490715968264381621468592963895217599993229915608941463976156518286253697920827223758251185210916864000000000000000000000000


I was obviously simplifying it for my explanation, as I didn't want to write a story. Yes, you can file-out and file-in, but that's not the primary mode of operation when working with the IDE.


Look where that wrong explanation leads —

"#1 it's a burden of knowledge to shift from the Unix filesystem to the Smalltalk runtime. There... are no files."

https://news.ycombinator.com/item?id=31474424

"Pharo by Example" doesn't need a wrong explanation —

"It may seem like the image file should be the key mechanism for storing and managing software projects, but in practice that is not the case at all. There are much better tools for managing code and sharing software that is developed in a team. Images are useful, but you should be very cavalier about creating and throwing away images."

"2.5 Saving, quitting and restarting a Pharo session" Pharo 9 by Example, page 14

https://books.google.com/books?id=hfpwEAAAQBAJ&lpg=PA75&dq=p...


Damn, that sounds really cool. The IDE has some amazing features too. The IDE states that you can modify the program while it's running: does that actually work well? Cause it seems like a very volatile feature that can go very poorly.


> The IDE states that you can modify the program while it's running: does that actually work well? Cause it seems like a very volatile feature that can go very poorly.

It's actually a paradigm that works well in many programming languages besides Smalltalk such as Common Lisp, but for some reason seems to be forgotten with "modern" and "trendy" programming languages. Except for.... JavaScript!
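Even C has a poor-man's version of it via shared-library reloading. A rough sketch (dlopen/dlsym are real POSIX calls; the plugin layout and file names are made up for the example):

    /* Keep the editable logic in a shared library and re-open it while the
     * host keeps running.
     *   plugin.c:  int tick(int frame) { return frame * 2; }
     *   build:     cc -shared -fPIC plugin.c -o plugin.so
     *   host:      cc host.c -o host -ldl                                   */
    #include <dlfcn.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        for (int frame = 0; frame < 10; frame++) {
            void *lib = dlopen("./plugin.so", RTLD_NOW);   /* reload each pass */
            if (!lib) { fprintf(stderr, "%s\n", dlerror()); return 1; }
            int (*tick)(int) = (int (*)(int))dlsym(lib, "tick");
            if (tick) printf("frame %d -> %d\n", frame, tick(frame));
            dlclose(lib);
            sleep(1);   /* edit and rebuild plugin.so meanwhile */
        }
        return 0;
    }
It's nowhere near the Smalltalk/Lisp experience (no live objects, no in-image tooling), but it's the same basic trick game developers use for live code editing.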


Just because you can doesn't mean you must — it depends on your process.

https://en.wikipedia.org/wiki/Hot_swapping#Software


Yea, being able to customize with C++ is preferable for some people compared to vimscript/lisp/etc. Then you have the issue where emacs and vim can choke on reasonably sized files. The other part is vim/emacs aren't as performant as they could be.


Though good and powerful editors, there are reasons why emacs and vim have a very small user share. (Everyone I know prefers VSCode when not using an IDE.) Even JEdit and Sublime each have a dedicated but comparatively small userbase.


Probably only a small share of the people you know is active on HN, too. At least that's the case for me.

I'd guess amongst HN users the share of emacs and vim users might be higher than for developers in general.


Both are dead in the water architecturally if you want to use a text editor that can render fonts and other graphics at 60 fps on a normal desktop.


You don't need to render anything at 60fps. Emacs is not a game, it only renders when needed and that it does pretty fast.


What is the license? I don't see it at a glance.


MIT

It's in the README.


Thanks. I was looking for LICENSE or COPYING and didn't see either.


Read on: it's MIT


MIT



