toprerules's comments

Awesome. AI isn't making Vim less relevant; it's more relevant than ever. When every editor can have maximum magic with the same model and LSP, why not use the tool that also lets you review AI-generated diffs and navigate at lightning speed? Vim is a tool that can actually keep up with how fast AI can accelerate the dev cycle.

Also love to see these local solutions. Coding shouldn't just be for the rich who can afford to pay for cloud solutions. We need open, local models and plugins.


Thanks! I totally agree. I’m looking at ways to further tighten the pairing between Vim’s native tools and LLMs (like with :diff and :make/:copen to run the code, feed errors back to the LLM, then apply the fixes, etc). The catch is model variability—what works for Llama doesn’t always work with R1 because of formatting/behavior quirks, and vice versa. Finding a common ground for all models is proving tricky.
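
For the error-feedback part, here's a rough sketch of the loop outside Vim, assuming a local OpenAI-compatible server (e.g. llama.cpp's llama-server); the port, model name, and prompt are all placeholders:

    # run the build, capture errors (what :make collects), ask the model for fixes
    import json
    import subprocess
    import urllib.request

    def build_errors(cmd):
        proc = subprocess.run(cmd, capture_output=True, text=True)
        return proc.stdout + proc.stderr

    def ask_llm(prompt):
        req = urllib.request.Request(
            "http://localhost:8080/v1/chat/completions",  # hypothetical local server
            data=json.dumps({
                "model": "local",
                "messages": [{"role": "user", "content": prompt}],
            }).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["choices"][0]["message"]["content"]

    errors = build_errors(["make"])
    if errors:
        print(ask_llm("Fix these build errors:\n" + errors))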

100% of people who dislike systemd can't come up with a better solution that solves all of the problems that systemd was created to solve.

> 100% of people who dislike systemd can't come up with a better solution that solves all of the problems that systemd was created to solve.

Perhaps because these people consider that solving all the problems systemd claims to solve introduces too many other problems, so they tend to consider the better solution to be one that solves, say, 90% of the problems systemd solves but introduces fewer new ones.


That's every piece of technology. If this wasn't the case, we wouldn't have jobs. Resistance to change on the basis of "it will cause more problems" halts all technological progress. If you think you can do better, show. me. the. code.

All the problems that systemd is "solving" are a large part of what we dislike, I think. So, no, of course we wouldn't have a replacement.

My recent experience was trying out Fedora atomic. I love that idea. I found systemd is kinda nice for service management.

But still, I kept running into issues with it spreading everywhere, doing its own thing, and being difficult to work around. That was partly me figuring out the atomicity, and partly that no other distro I've tried has leaned into systemd that hard.

I'm moving on to Arch since it looks like I can at least get this out of my boot process.

(Or, more on the topic of the thread, Tumbleweed looks like an interesting take on being able to roll back to known working states as a replacement for Fedora atomic.)


I think Gentoo explicitly does not do systemd; it might be worth a look if you want to avoid systemd, considering that I think the "official" stance of Arch is systemd.

There's also SixOS coming soon (https://events.ccc.de/congress/2024/hub/de/event/sixos-a-nix...). NixOS does kind of a similar atomic thing, so you might enjoy that, but vanilla NixOS is systemd based, so once SixOS drops you might get exactly what you want.


Arch does have systemd by default, but there seem to be options to do an install without. Probably a lot of struggle down that road for a desktop environment though.

I saw some of that SixOS and I am really interested. There was an "ownerboot" tool they linked to that also looked nice to me.


Gentoo can do systemd or OpenRC; OpenRC is the default. I would recommend Guix System if you don't want systemd, as it uses GNU Shepherd instead.

I'd be mostly concerned with package selection with Guix System. Don't you have to go out of your way to install anything proprietary on there? Also doesn't it use a Linux kernel without any blobs? I would think that drivers could be an issue.

Yes, you're correct on both counts. You can add non-default channels and get a different kernel if you so desire. Personally I have stuck to the defaults.

The best config system I've ever seen used plain old Python to generate static configs. Everyone knows Python. Python is easy to do data munging in, as demonstrated by its popularity as the #1 data science tool. There are boundless libraries to make Python more functional, use stricter typing, or reduce the amount of side effects it can cause. Even Starlark is just a dialect of Python.
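
To make the idea concrete, here's a minimal stdlib-only sketch of the pattern (the service names and fields are invented for illustration):

    # config.py -- run as `python3 config.py > config.json`.
    # Stdlib only, so any Python 3 will do; all names here are made up.
    import json

    ENVIRONMENTS = ["dev", "staging", "prod"]

    def service(env, replicas):
        return {
            "name": "web-" + env,
            "replicas": replicas,
            "log_level": "debug" if env == "dev" else "info",
        }

    config = {env: service(env, 1 if env == "dev" else 3) for env in ENVIRONMENTS}
    print(json.dumps(config, indent=2))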

You can spend decades building a complicated configuration language, or use a bespoke functional language as Mill does, but if you're a single company that can enforce code quality and just wants to get the job done, I feel like everything else is unnecessary and over-engineered to scratch some academic itch for a "better system" that enforces "purity" at the cost of velocity.

I also think that now that LLMs are all the rage, how much context do you think they have for a bespoke config language vs Scala vs Python? I think we know the answer to that one.


Python is a terrible choice for that sort of thing. Who really wants to have to set up a venv and deal with pip nonsense just to write a config file? Hell even installing Python is sometimes difficult.

You could say the same thing about setting up a JVM...

The JVM doesn't have everything all over the place; JAVA_HOME and PATH suffice.

It doesn't require reaching out to tons of C libraries when performance is called for, and there are at least two free beer options to AOT compile to native code, if required.


I definitely would!

You don’t need any of that now that we have uv (https://github.com/astral-sh/uv)
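
For what it's worth, uv can also run single-file scripts with inline dependency metadata (PEP 723), something like:

    # config.py -- run with `uv run config.py`; uv resolves the inline
    # dependencies below into an ephemeral environment (PEP 723).
    # /// script
    # requires-python = ">=3.9"
    # dependencies = ["pyyaml"]
    # ///
    import yaml

    print(yaml.safe_dump({"replicas": 3, "log_level": "info"}))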

Not really. Uv is great but it doesn't really help here. If an app uses Python as its configuration file format then the app is running Python. It's going to do `python3 config.py` or similar. It doesn't know anything about uv.

So you would still need to create a uv project, run `uv sync` and `uv activate` or whatever and then run your app. Not practical.

The only option if you use Python as a config file format is to stick to old features (Python 3.6) and not use any third-party libraries. But OP was saying third-party libraries are one of the benefits of using Python...


Stick to the standard library as of an oldish version of python (3.6?) and it's pretty much zero-install zero-config.

On a Mac, Python has always been a challenge.

Up until recently Apple only included Python 2, so developers used Homebrew to install Python 3. Now it's very common to find two versions of Python 3 installed on a Mac developer's laptop that conflict with each other.

You really want to be using virtualenv.


Hence the advice to stick to the standard library - because then it doesn't matter all that much. I'm not sure what the full set of environments I've tested my Python 3 scripts actually is, but they've run OK on all the various cloud CI systems I've tried, plus my laptops, VMs, and work PCs.

All a Mac user has needed to do was install from https://www.python.org/downloads/ and then run python3 in the shell. Even if you use MacPorts or brew or conda or whatever, there's a distinct command to run Python 3 instead of 2.

I get that Python's package manager situation is terrible, but like the other user said, you only need built-in packages to spit out a config json or whatever.


So you would then end up with three Python 3 installations.

And if you install from the website, it doesn't override the PATH. So you will still be using the Apple or Homebrew one.


If you install it 3 times then yeah, but even then, all 3 of them will still work.

But I could've sworn the python.org installer set the PATH. If not, that's kinda annoying.


If only Python had the equivalent of npm.

I thought that was pdm. I've never seen it used so far, though.

This does work well. A team I was on at a past job did exactly this. On Unix the service literally ran `std::system("python config.py >config.json")` on startup.

The problem with this is that the answer to the question "what kind of configuration can I expect?" is "simulate the script and find out."

If the script is written well, and is short, then the parameters that are filled in by the runtime environment are apparent. Over time, though, there is a risk that the script will not remain written well, and it almost certainly won't remain short.


An approach I use is splitting my config tools into two stages:

Stage 1 creates an "explicit" config that can be exported to plaintext and contains exactly what is going to be created/modified, with no abstraction/simplification

Stage 2 applies the "explicit" config

You get to be as clever as you want in stage 1, which avoids both excessive copy-pasting and not knowing what your tool is going to do because all you have to go on is some homegrown DSL.
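
A minimal Python sketch of the two-stage shape (the actions and paths are invented for illustration):

    # Stage 1: expand whatever cleverness you like into an explicit,
    # diffable plan with no remaining abstraction.
    import json

    def stage1():
        return [
            {"action": "write_file",
             "path": "/etc/app/" + name + ".conf",
             "content": "name=" + name + "\n"}
            for name in ("alpha", "beta")
        ]

    # Stage 2: dumbly apply the explicit plan, one entry at a time.
    def stage2(plan):
        for step in plan:
            if step["action"] == "write_file":
                print("would write", step["path"])  # a real tool writes the file

    plan = stage1()
    print(json.dumps(plan, indent=2))  # the exportable plaintext form
    stage2(plan)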


You run into the same problem with config DSLs, except now you're dealing with a DSL. Config is almost never going to be static.

True. One advantage I can imagine for a DSL is that it constrains what is possible and optimizes (syntactically) what it's supposed to be for. I think that the author of Nix justified its language that way.

The counterargument is "eventually you'll need every facility provided by a programming language, so just start with a programming language."

I'm not sure how I feel about it. The YAML templating situation in Kubernetes is a [shit show][1]. Then again, I did once cave into the temptation of writing a [lisp-like XML preprocessor][2] to make my configurations less verbose. It doesn't have any access to the environment, though, so it's not a general purpose configuration language, just a shorthand for static XML.

[1]: https://www.davidgoffredo.com/no-string-templates

[2]: https://github.com/dgoffredo/llama


What constraints are needed? I've used DSLs that are almost Python but not quite, I think because they were hermetic and deterministic. Even those ended up being produced dynamically using some higher level config DSL or just regular code. Like once you're doing RPCs, it's general programming language territory (though there are also DSLs that do this, which is cursed).

And yes, I have very bad memories of Kubernetes YAML, also YAML itself.


Scala is hardly some obscure bespoke language. It's a top-20, maybe top-10 programming language, that's been around for 20+ years (and had far fewer breaking changes than Python over that time). Most Python translates directly into Scala, but with the benefit of a proper sound type system and full IDE support. And it's a great language for data munging.

The claim sounded outlandish, but Scala does indeed look to be around the top 10-20 languages, in hiring for instance:

https://www.devjobsscanner.com/blog/top-8-most-demanded-prog...

Scala is only in 0.5% of the scanned job offerings, and is far, far behind the major languages in absolute numbers, but I was surprised there's more demand for it than for Rust or even Perl, to be honest.


I'm not surprised it's above Rust and Perl, but it's below Dart?! Ouch.

Might as well just list it as Flutter (Dart).

Is this “top 10” your LLM hallucinating? :)

> Everyone knows Python

No they don’t. Just like everyone doesn’t know Cobol, Fortran, Scala etc.

But by having a programming language as your build tool you make it harder for new people to onboard: in order to build the project they often need to know some unique, language-specific syntax. And in order to find this syntax they look around on GitHub, and because it's a programming language, every project has its own unique, project-specific approach.

Versus something like Cargo.toml where it’s simple and consistent regardless of which project you look at.


> No they don’t. Just like everyone doesn’t know Cobol, Fortran, Scala etc.

Sure, somebody might not have Python experience, but it's pretty easy to just not hire someone who says they don't know Python and isn't willing to learn it for the role. I don't know that you'd filter out many candidates from any random 100 devs.


I am talking about graduates and others new to programming.

Of course they are willing to learn for the role but making it hard for them in the beginning can forever turn them off a language. That has been a big problem with Scala and Spark.


I've never seen a language used for ancillary purposes be the make or break on hiring for a role, it's always just been expected that you'd pick it up as you go. And IMO, python is the least offensive compared to stuff like Perl, Ruby (for Chef) or whatever the heck Terraform is.

I have onboarded dozens of Data Engineering graduates in using Spark.

In the beginning this was with Scala and every single one struggled with SBT.

Giving developers unlimited flexibility in how they create build files is a bad idea.


Can they use pyspark?

HCL?

> Sure somebody might not have Python experience, but it's pretty easy to just not hire someone who says they don't know Python and isn't willing to learn for the role.

This works just as well for Scala.


There are way more people who know Python than Scala, and it's also an easier language to get started with.

So then they need to know toml (Tom's Obvious Minimal Language)? https://github.com/gtk-rs/examples/blob/master/Cargo.toml I don't know what this file says.

I think it's hard to argue that Cargo.toml is any simpler than Python. JSON might be ubiquitous enough for anyone to read and understand, but if Python is foreign then TOML is no better.

I don't know about Python specifically, but using a language I'm familiar with to generate ninja files (+ any header/environment/&c) for the build has become my go-to way of doing builds in the past 18 months or so.

> There's boundless libraries to make Python more functional, use stricter typing, or reduce the amount of side effects it can cause.

What are some examples of a library that can limit or prevent side effects of a piece of python code? I could use one right now.


> I also think that now that LLMs are on the rage, how much context do you think they have for bespoke config language vs Scala vs Python? I think we know the answer to that one.

Nothing against Python, but of all the reasons to choose a technology, whatever is more represented on the dataset of some LLM is the worst reason.

This is a death spiral. There's no hope for the future of this industry if newcomers are thinking like this.


I cared about programming languages when I was a newcomer. Stopped caring about 10 years ago. They're just tools, each with their own gotchas and different design choices I couldn't care less about. Between two tools that both work ok, I will definitely pick whichever one my team and I can learn the easiest, and that includes LLM coverage.

Ah yes, the age-old belief that all software is complex enough that one must first run some other bespoke Turing-complete program to build every single piece of software.

And of all the languages to pick for this, Python, with its non-hermetic execution environment, is bound to bite you in the ass once your build scripts start depending on libraries. Oh, you could use poetry to solve the library issue with Python, or maybe it'll be setuptools, pip, or whatever is the flavour of the month in Python packaging.

After fighting with Nix for a sufficiently long time, I think most language-specific build tools are not necessarily the best solution to the problem of automating a build for a bit of software written in language X. Complex projects will eventually evolve to depend on multiple languages (unless you're the Linux kernel), at which point the specialized language build tools turn into cumbersome barriers in the build process, where different build tools are not aware of the caching, conventions, and configurations of any other tool. As such, in an ideal world, any new language would come with a compiler or bundler that can be supported well by higher-level build/packaging tools. And bespoke Python scripts ain't that.


Can't really say enough good things about Vim. I retired early as a staff software engineer because of the work I did using Vim. From hacking silly games in C in high school to now, I've always been able to use Vim and run circles around any "modern" text editor or IDE. I feel like I owe Vim as much public praise as I can give so others can reap the rewards like I did.

What really frustrates me is how little people seem to want to invest in their tools, or the outright lies they tell themselves about how much configuration they need to use Vim on a daily basis. My Vim config is 200 lines, and my last commit was 3 years ago. I've invested maybe a few days of my life into configuring Vim and I use it 8-16 hours a day.

Vim can do so much on its own. It has autocomplete, fuzzy finding, integration with build systems, file search, navigation using a directory browser, jump to symbol, parsing compiler output to jump to bugs, support for gdb and breakpoints, a built-in terminal, and copy to and from the system clipboard, and with literally 8 lines of code you can get support for every linter, LSP, etc. known to man, plus git integration that lets you use native git commands with seamless editor integration.


I've been rocking dual PyCharm with Vim bindings plus a Neovim setup and bounce between them. The only thing preventing me from going 100% Neovim is PyCharm's Python debugger.

I have set up nvim-dap[0] with all the related plugins; it works for simple scripts but it bugs out and crashes when running our Flask web app. I rely heavily on the PyCharm debugger to step through our app.

Have you had a good experience with setting up a debugger in vim/neovim or is that not part of your workflow?

0: https://github.com/mfussenegger/nvim-dap


I don't use DAP so I can't help you there. I use Vim for editing code. When I need to debug I run lldb or gdb in a separate tmux window, and if I need to see the code I enable TUI mode. There are many wrappers, including Termdebug, Vimspector, termdbg, etc., to add visual breakpoints, but I have never found them necessary, because again, gdb/lldb can already show you the code in various formats, including assembly. It sounds like you are using PyCharm in exactly the same way you can already use a standalone debugger.

For Python I find a built in debugger especially unnecessary. Most interpreted languages have a debugger that can be triggered from the source code to drop into an interpreter, and Python is no exception - see pdb and breakpoint().
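
For example (breakpoint() is built in since Python 3.7 and drops into pdb by default; the handler here is invented):

    # breakpoint() pauses execution in pdb, no IDE required;
    # set PYTHONBREAKPOINT=0 to disable it without touching the code.
    def handler(request):
        total = sum(item["price"] for item in request["items"])
        breakpoint()  # try `p total`, `n` to step, `c` to continue
        return {"total": total}

    handler({"items": [{"price": 3}, {"price": 4}]})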


A little ashamed that I have never used pdb directly; I'm going to try to broaden my horizons and learn more about it :) Appreciate your perspective.

Cheers


Could you share your config?

Technology has diminishing returns. If technology does everything for us, then what's the point of existing?

In what way is a hammer an intellectual tool?

That wasn't the claim. The claim was:

"As technology evolves, humans lose their capacities as their tools get better."


I'm a Vim user, but I occasionally try JetBrains/VSCode to see what I'm missing out on, and RustRover, CLion, GoLand, etc. are by far the most sluggish pieces of software I've used. I am demonstrably slower on them than using Vim with my fuzzy finder, LSP, and AI integrations.

I thought Fleet might add the "magic" to something more VSCode like, but I also don't understand the long term vision.


Same, although I haven't tried VSCode in a lot of years. I did at one time have it set up to emulate Vim quite well and used it as a daily driver for over 6 months. It would puke the bed at least once a day, resetting the theme and losing all keyboard shortcuts. I'd restart it and go on my merry way.

I keep my Kotlin LSP for Neovim up to date, but it's just not a great experience. I often have to open IntelliJ to sort out import issues. The entire Java community is built on "don't worry about knowing where your imports are coming from, your IDE will do that magic for you". So much is this the case that the first Manning Kotlin book even said it. Because of this, I was eager to give Fleet a shot. My impression was, "you won't build an LSP because you're afraid of losing revenue... but you'll build this?" OK, I guess that makes sense: keep people on your playground.

I sure do LOVE Kotlin as a language. But telling me I have to use your product to write it? I'd rather write Go... or even TypeScript at that point. Both of those have really nice experiences in a simple text editor + LSP.


Concur. I find that RustRover and PyCharm are outstanding in terms of refactoring, introspection, and treating projects as unified. But they are so slow. Lately, even copy+pasting may take seconds or longer. This and other actions sometimes terminate with an error about being too complex.

Can't I have both power, and responsiveness?


IntelliJ IDEA is their real product. Once you've added a debugger, test runner, and decompiler then you're ready to program Java.

Pretty sure IntelliJ comes with all those things?

That's why I use JB products. I download them, start them up and that's it. I don't need any separate plugins, they just work perfectly out of the box.


Also probably part of the reason why they're so bloated. IDEA with just a single mid-sized project can and will take 10GB+ memory (simple java+gradle for spring or android). Out of the box it has ~100 plugins installed, most of which are useless for most people.

It does work well but it's often too much and uses even more memory than vscodium.


Are you using the RAM for something else? Would you prefer to have 30 out of 32GB sitting unused?

RAM is cheap; I don't see why people complain that it's being utilized. Doesn't bother me at all.

I haven't looked into it but I would assume you can disable these unnecessary plugins if you don't want them?


It sounds like your perception of how a computer works is incredibly flawed. RAM doesn't sit unused; the kernel uses it for caching and locality. The more RAM you give the kernel, the more you can have resident in the slab cache, the page cache, the filesystem cache, the network backlog, etc. Even on very large machines with more RAM than an average desktop, the kernel can still make use of almost all of it.

I work on efficiency so when people say things like "it's ok for my IDE to be an inefficient pile of garbage that locks up resources" it makes me wonder what kind of program they are producing.


You're both right. The issue here is that both the JVM and the kernel use algorithms that can use all your RAM to speed things up, and there's no good way to know which side should 'win' (to get the best performance).

Historically the JVM would happily use all your RAM even if it didn't need to, because that reduces the amount of GC work required, which increases the CPU time available to the IDE for analysis and other tasks. It can be told there's a limit, in which case it'll spend more time GCing to stay under it.

Modern JVMs changed this old default and will wait until the app is idle then start reclaiming memory and releasing it back to the OS. I guess it depends what you mean by "mid sized" but 10GB is quite a bit. It'd be worth checking that everything is running on a recent JVM. Gradle in particular can be a hog if you run it on old JVMs.


I use Rider for .NET and WebStorm for JS. Before I left work I checked: with our small/medium-sized project, each of them was using a little under 2GB according to Windows Task Manager. Adding in some other related processes, I'd estimate the two combined might be using 5-6GB in total. So I have at least 26GB left over.

To quote Lord Farquaad: That is a sacrifice I am willing to make


I didn't say it's an inefficient pile of garbage. It's obviously making use of the memory in order to provide information and quick navigation etc.

My computer works completely fine while I have multiple jetbrains ides and browser windows, Docker etc running.

So maybe your perception is the one that's flawed. I know for a fact that my computer can handle it, but it seems like you mistakenly believe that it can't?


I've reached a level (staff engineer at MAANG) that I consider to be difficult to obtain using plain old Vim, and I've noticed that other high performers tend to still use Vim or Emacs. There are plenty of amazing developers who use VSCode, JetBrains, etc. but I think there are certain personality traits - seeks out barriers to entry, likes to demagic tools through exploration, values completely open source, highly tinkerable community driven projects, etc. - that explain this phenomenon more than feature set or ease of use.

When I read about how complicated VSCode's remote editing was (which I knew about before this article) it just made me want to use VSCode less. I can just ssh into a machine and use whatever editor is on the machine. VSCode's solution works, but it's also not nearly as elegant or universally applicable, and is more prone to breaking.

Also, Tramp is still quite awful, sorry Emacs users (netrw isn't any better).


I agree that Tramp is not great, but there's a simpler solution that works much better: `watchexec` + `rsync`.

I can set it up to watch specific file paths, and sync exactly what I need. I'm still working on the local FS, so there's no editing delay, I can use all my local tooling, and the syncing takes milliseconds. I can also make it delete files remotely when they're deleted locally. And finally, I always have a local copy once I stop working on the remote machine, which is something I always needed to sync manually with Tramp. Plus, it's editor agnostic.
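
In the same spirit, here's a tiny polling sketch of the watch-and-sync loop (watchexec does this far better; the host, path, and interval are made up):

    # naive watch-and-sync: poll mtimes, rsync on any change
    import subprocess
    import time
    from pathlib import Path

    SRC = Path("~/project").expanduser()
    DEST = "devbox:project/"  # hypothetical remote

    def snapshot():
        return {p: p.stat().st_mtime for p in SRC.rglob("*") if p.is_file()}

    last = snapshot()
    while True:
        time.sleep(0.5)
        current = snapshot()
        if current != last:
            # --delete mirrors local deletions remotely, as described above
            subprocess.run(["rsync", "-az", "--delete", str(SRC) + "/", DEST])
            last = current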

This VS Code feature would make me nervous, especially now that I know what it does.


I joined a new team that's very VSCode-centric and I've been thinking a lot about why I still prefer Vim. My latest observation is that I find all the toolbars and other content filling the screen to be visually noisy. I turned on Copilot and wow, it's more toolbars, and text is flying into where I'm trying to write. Vim just lets me look at, think about, and write code. I can't get into "flow" with VSCode.

Oh and since people are citing age in sibling posts, I'm mid-30s. I used emacs in university and switched to vim at my first job. I used IntelliJ for a really gnarly Java project, but otherwise I keep using vim.


That's exactly what I dislike about vscode. It's so easy to trigger some pop-over that moves or obscures the precise thing you're trying to read.

The only thing really keeping me using it is copilot.


> likes to demagic tools through exploration, values completely open source, highly tinkerable community driven projects, etc.

I liked all this stuff when I coded as a hobby. Now that it’s my job, I like VSCode because I never need to tinker with it to any significant extent and I can focus on getting my work done. I fire up vim occasionally if I need to do some fancy regexp munging.


I don't tinker with Vim. I wrote my config many years ago and I touch it maybe once every few years. Otherwise it's completely rock solid.

I also find it highly amusing when people talk about needing to get work done versus Vim. I mean, I didn't make staff at one of the highest-paying, hardest-to-climb companies because I was sitting there tweaking configs instead of having impact. Vim lets me have impact by getting out of my way and letting me do precisely what I need to do wherever I need to do it.


I don’t doubt that Vim is an efficient tool for you. In the end it’s just a text editor. Editing text is not the hard part of writing code. I do not understand the impulse to judge people according to their largely inconsequential choice of editor (and I must say it seems to be disproportionately users of certain editors who are susceptible to it!)

The part of your post that I responded to talked about exploring highly tinkerable projects. I guess you’re saying that you like that Vim is explorable and tinkerable even though you don’t waste time exploring it or tinkering with it. That’s fair enough, but I hope you can at least see the sentiment that I was responding to.


The principal and distinguished engineers on my team used Vim and Emacs :P

I would imagine that is more due to age than anything else.

That is true

This must be your first hype cycle then. Most of us who are senior+ have been through these cycles before. There's always a last 10% that makes it impossible to fully close the gap between needing a programmer and a machine doing the work. Nothing about the current evolution of LLMs suggests that they are close to solving this. The current messaging is basically: look how far we got this time, we will for sure reach AGI or full replaceability by throwing X more dollars at the problem.

So Work = 0.1^(Ct), where C is the development pace. Everything points to the C of AI being large. How quickly does Work become a rounding error?

Sure, C could be log(t), but it could also be ke^t. Everything to me feels like it's the latter; I really want to be wrong.


> So Work=0.1^Ct where C is the development pace.

Did you see the bit where he said "Most of us who are senior+ have been through these cycles before". They rolled out similar equations in previous hype cycles.

LLMs were released about 3 years ago now. Over the weekend I made the mistake of taking their word on "does GitHub allow administrators to delete/hide comments on PRs". They convincingly said "no". Others pointed out the answer is "yes". That's pretty typical. As far as I can tell, while their answers are getting better and more detailed, what happens when they reach the limits of their knowledge hasn't changed. They hallucinate. Convincingly.

That interacts with writing software in an unfortunate way. You start off by asking questions, getting good answers, and writing lots of code. But then you reach their limits, and they hallucinate. A new engineer has no way to know that's what happened, and so goes round and round in circles, asking more and more questions, getting complete (but convincing) crap in response, and getting nowhere. An experienced engineer has enough background knowledge to be able to detect the hallucinations.

So far, this hasn't changed much in 3 years. Given the LLMs' architecture, I can't see how it could change without some other breakthrough. Then they won't be called LLMs any more, as it will be a different design. I have no doubt it will happen, but until it does, LLMs are no major threat to software engineers.


The thing is the total amount of work to do keeps increasing. We're putting firmware in lightbulbs now.

When everything includes software, someone needs to write and maintain that software.

If software becomes cheaper, we'll just use even more of it.


C'mon man, look at nature: exponential curves almost never are actually exponential. Likely it's the first part of a logistic curve. Of course you can sit here all day and cry about the worst outcome for an event in the long list of things no one can predict. It sounds like you've made your mind up anyway and refuse to listen to reason, so why keep replying to literally everyone here telling you that you're buying into the hype too much?

You're young, and so we'll give you a pass. But as stated, _the entire point of tech is evolving methods_. Are you crying because you can't be one in a room of hundreds feeding punchcards to a massive mainframe? Why not? It's _exactly_ the same thing! Technology evolved, standards changed, the bar raised a bit, and everyone still went to work just fine. Are you upset you won't have a job in a warehouse? Are you upset you aren't required to be a farmer to survive? Just chill out man, it's not as terrifying as you think it is. Take a page from everyone who ever made it, try to actually listen to the advice of people who've been here a while, and stop just reflex-denying any advice that anyone gives you. Expand your mind a bit and just consider the idea that you're actually wrong in some way. Life will be much easier, less frantic, and more productive.


People keep telling students basically to “think happy thoughts” and are not being honest with them. The field is contracting today, more people with experience are chasing fewer jobs, and AI is hollowing out the low end.

Every single opening gets 1000s of applicants within the first day. It’s almost impossible to stand out from the crowd if you are either new to the industry or have a generic skillset.


Honestly I think moving up in a layer of abstraction is not the same as something resembling an intelligent agent.

If "resembling" intelligence was enough, all programmers would've been replaced long ago.

I've said this on here before, but replacing programmers means replacing our entire economy. Programming is, by and large, information processing. Guess how many businesses' services can be summed up as "information processing"? I'd wager most of them.

So maybe you're fucked, yes, but if so, we all are. Maybe we'll just have to find something to do other than exchange labor for sustenance...


The reason all these approaches have not succeeded is that to close the gap, you have to backtrack on all the effort made so far. Like choosing a shortcut and stumbling on an impassable ravine. The only way is to go back.

As a systems programmer, Rust has won. It will take decades before there is substantial Rust replacing the absurd amounts of C that run on any modern Unix system, but I do believe that out of all the replacements for C/C++, Rust has finally gained the traction most of them have lacked at the large companies that put resources behind these types of rewrites and exploratory projects.

I do not think Zig will see wide adoption, but obviously if you enjoy writing it and can make a popular project, more power to you.


I agree. It's not ideal, but Rust is a genuine improvement across the board on C and C++. It has the inertia and will slowly infiltrate and replace those two. It also has the rare capacity to add some new areas without detracting from the mainstay: it's actually good as an embedded language for the web and as a DSL. C/C++ definitely didn't have that.

Safe C++ could still be a genuine improvement on Rust - if only because the community would be larger by at least one order of magnitude compared to present-day Rust. Though you would also need a viable C++ epochs proposal to keep the complexity from becoming totally unmanageable.

I'm not convinced, actually. The problem is that every proposal for safe C++ that I've seen sacrifices compatibility with the broader C++ ecosystem. Yes, you could graft borrow checking onto a C++-like semantics, but what you would end up creating is an incompatible dialect of C++--essentially a new language. So the resulting ecosystem would actually be smaller than that of Rust.

I think this issue is a bit overstated. What people want is not to have interop with existing C++ code, but to just keep writing C++ in the same idiomatic style. And this just isn't feasible if you want to automatically ensure memory safety. It's especially problematic in larger codebases that can't be comprehensively surveyed, which is what people mostly want to use C++ for. So, something has to give.

Rust is a bit of a different story, because the clunkiness of Pin<> actually makes interop with C++ (and, to a lesser extent, C) surprisingly difficult in a way that might be amenable to improvement.


Many, after doing a review of Rust, say they don't like it or will stop using it. It's very premature to declare it has "won", whatever that can be said to mean. For example, ThePrimeTime[1] (a famous YouTube programmer) is another who states he does not like Rust anymore and would rather use some other language.

It appears part of the controversy surrounding Rust is that many are of the opinion that it's not worth it because of the limited use case, poor readability, complexity, long compile times, etc., and certain advocates of Rust appear not to understand or appreciate the difference in opinions. Rust is fine for them, specifically, but not for everyone.

[1]: https://www.youtube.com/watch?v=1Di8X2vRNRE


A big company I worked at actually deprecated the last of its Rust code last year. Maintaining Rust was much more expensive than predicted, and hiring and/or mentoring Rust Engineers proved even more expensive.

A simpler performant language like Zig, or a boring language + a different architecture would have been the better choice.


> A big company I worked at actually deprecated the last of its Rust code last year.

What replaced Rust?


A newer JVM language, which admittedly came with its own share of problems.

Is Zig really a simpler language? It has a ton of features!

"Simple language" is often used as shorthand for "a language in which it is simple to express ideas", rather than "a language with few features". A language can, in fact, have many features and still be simple(-to-write-in); or, like Go, it can have so few features that it is complex-to-write-in.

Rust has very real limitations and trade-offs. It compiles slowly and the binaries are large. The compiler also makes performance sacrifices that make it generally slower than C. I'm sure the language will continue to be successful, but it hasn't "won".

Why do you say slower than C? I’ve never seen a reason to believe they’re anything but roughly equivalent.

From my experience comparing C++ vs Rust on test algorithms: for a naive algorithm implementation, Rust is usually slightly faster than C++. But when you try to optimise, it's the opposite. It's really hard to optimise Rust code; you need to put in lots of unsafe, and unsafe is not user-friendly. Rust also forces you into some designs that are not always good for performance.

The last I heard, Rust had issues with freeing memory when it wouldn't need to, particularly with short-lived processes (like terminal programs) where the Rust program would be freeing everything while the C version would just exit and let the operating system do cleanup.

Rust has ManuallyDrop, which is exactly the functionality you’re describing. It works just fine for those types of programs. The speed of the two is going to be largely dependent on the amount of effort that has gone into optimizing either one, not on some theoretical performance bound. They’re both basically the same there. There are tons of examples of this in the wild at this point.

ManuallyDrop seems overkill when you have Box::leak().

You may also be able to simply exit() before your allocations get dropped.

I maintain a Rust project that is ~50,000 loc [1]. I've never felt that compiling is slow; on the contrary, it's always a pleasure to see how fast the project compiles (at least in debug).

In release, build time is longer but in this case, it's in the CI/CD so it doesn't bother me. We try to be very conservative with adding dependencies so it may help compilation time. Also I'm coming from a Java/Kotlin world so a lot of things appear like fresh air in comparison...

[1]: https://github.com/Orange-OpenSource/hurl


Zig might not become very popular, but IMO, it will become more popular than Rust. Zig is good at all the areas Rust is good at. Zig is also good at game development which Rust is not good at.

And Zig is better when integrating with C/C++ libraries.


> Zig is good at all the areas Rust is good at.

Not memory safety.

> Zig is also good at game development which Rust is not good at.

Let's see. Over just the past year in Bevy I've implemented GPU driven rendering, two-phase GPU occlusion culling, specular tints and maps, clustered decals, multi-draw, bindless textures, mixed lighting, bindless lightmaps, glXF support, skinned mesh batching, light probe clustering, visibility ranges with dithering, percentage-closer soft shadows, additive animation blending, generalized animation, animation masks, offset allocation, volumetric fog, the postprocessing infrastructure, chromatic aberration, SMAA, PBR anisotropy, skinned motion vectors, screen-space reflections, depth of field, clearcoat, filmic color grading, GPU frustum culling, alpha-to-coverage, percentage-closer filtering, animation graphs, and irradiance volumes. In addition to extremely rapid general engine progress, there have also been successful titles, such as Tiny Glade.

Whether Rust can be a productive language for game development was an interesting question a few years ago. At this point the answer is fairly clear.


You cannot use yourself as an argument about the productivity of Rust; you designed this language!

Joke aside, the velocity of the Bevy Engine as a whole is indeed a testament to Rust productivity.

Last year I had two groups of students who built a multiplayer FPS and a tower defense respectively after just one and a half days of Rust class, so the learning curve is clearly not as bad as people like to claim on HN.


What are you doing all that for? Sounds like a ton of work. Like months of full time?

> but IMO, it will become more popular than Rust

While I wish this were true, I very much doubt it, at least not until Zig has proper interfaces and gives up its weird hangup on anonymous functions. It's also extremely easy to effectively lose access to Zig features unless you clutter your code. For example, say you want to use libevent in Zig: your event callbacks must use the C calling convention, meaning you lose access to try and errdefer, which are two of the most defining features of Zig. And while you can remedy this by having the callback invoke a Zig function, doing that just doubles every interaction between your Zig code and libevent, which is already cluttered because of the lack of anonymous functions.

These things aren't as important as compile times, but they are annoyances that will drive a non-zero amount of people away.


It seems you are proving that Zig will not become very popular, but not that Zig will not become more popular than Rust.

I agree that Zig will not become very popular. It needs certain programming experiences to master it. But I'm quite sure it will become more popular than Rust.


There's a contradiction though, as Rust is already popular…
