Hacker News | 59nadir's comments

A lot of our Odin procs take an allocator as a required argument specifically so they force you to choose one at the call site, because yes, often you want it to be explicit.
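To illustrate the pattern, here is a rough sketch of the same idea in C++ using std::pmr rather than our actual Odin code (the names are made up): the allocator is part of the signature, so every call site has to pick one.

    #include <cstdio>
    #include <memory_resource>
    #include <string>

    // The allocator is a required parameter: callers must choose one explicitly.
    std::pmr::string greet(const char* name, std::pmr::memory_resource* alloc) {
        std::pmr::string buf{alloc}; // every allocation buf makes goes through alloc
        buf += "Hello ";
        buf += name;
        return buf;
    }

    int main() {
        // One caller uses the plain heap...
        auto a = greet("John", std::pmr::new_delete_resource());

        // ...another uses a small arena whose memory is released at scope exit.
        std::pmr::monotonic_buffer_resource arena{1024};
        auto b = greet("Alice", &arena);

        std::printf("%s\n%s\n", a.c_str(), b.c_str());
    }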

I am now imagining a language where the default allocator is a GC, but you could change it to something else using implicits/context, and this is enforced everywhere (e.g. there is no way to allocate memory "from magic"; you have to call the allocator, even if it comes from the context).

At least thinking in Scala as a base syntax here, it would mean:

    def foo(name: String)(implicit allocator: MemoryAllocator): StringBuf = {
      val buf = allocator.new(100, String)
      buf.append("Hello ")
      buf.append(name)
      buf
    }

    println(foo("John")) // implicit allocator from context, default to GC, no need to free

    implicit val allocator: MemoryAllocator = allocators.NewAllocator()
    println(foo("Alice")) // another implicit allocator, but now using the manual allocator instead of GC
    defer allocator.free()

    val explicitAllocator = allocators.NewArenaAllocator()
    foo("Marie")(explicitAllocator) // explicit allocator only for this call
    defer explicitAllocator.free()

I don't think there is any language right now with this focus, since Odin seems to be focused on manual allocation.

Rust is a bad choice for web and a meh/bad one for game development. It's certainly not good enough at either to have much of a technical edge over Ada.


There are worse choices in the compiled web language niche. Most of the competitors don't qualify, because people don't want to do formal verification for their side projects.


I work in a two-man team making software that is 500-1000 times faster than the competition, and we sell it at ~40% of their price. Granted, this is a niche market, but I would be very careful about stating that price/costs are the entire picture here. Most developers, even if you suddenly made performance a priority (not even top priority, mind you), wouldn't know how to actually achieve much of anything.

Realistically, only about 5% or so of my former colleagues could take on performance as a priority, even if you only asked them not to do outright wasteful things and to minimize slowness rather than optimize, because their entire careers have been spent optimizing only for programmer satisfaction (and no, this does not intrinsically mean "simplicity"; they are orthogonal).


If you really think what you're doing can be easily generalized then you're leaving a lot of money on the table by not doing it.


Generalized? Probably not. Replicated to a much higher degree than people think? I think so. It wouldn't matter much to me personally outside of the ability to get hired because I have no desire to create some massive product and make that my business. My business is producing better, faster and cheaper software for people and turning that into opportunity to pay for things I want to do, like making games.

Disclaimer: Take everything below with a grain of salt. I think you're right that if this was an easy road to take, people would already be doing it in droves... But, I also think that most people lack the skill and wisdom to execute the below, which is perhaps a cynical view of things, but it's the one I have nonetheless.

The reason I think most software can be faster, better and cheaper is this:

1. Most software is developed with too many people, which is a massive drag on both productivity and costs.

2. Developers are generally overpaid, US developers especially so, and this compounds with #1 for terrible results. It is particularly bad since most developers are really only gluing libraries together and are unable to write those libraries themselves, because they've never had to actually write their own things.

3. Most software is developed as if dependencies have no cost, when they are among the highest cost-over-time vectors there are. Dependencies are technical debt more than anything else: you're borrowing against the future understanding of your system, which impacts development speed, maintenance and your grasp of the characteristics of the final product. Not only that; many dependencies are so cumbersome that the work of integrating them even in the beginning is actually more costly than simply making the thing you needed.

4. Most software is built with ideas that are detrimental to understanding, development speed and maintenance: both OOP and FP are overused and treated as guiding lights in development, which leads to poor results over time. I say this as someone who has worked with "functional" languages and FP as a style for over 10 years. Just about the only useful part of the FP playbook is considering whether a function can be pure, because that's nice. FP as a style is not as bad for understanding as classic OOP is, mind you, but it's generally terrible for performance, and even the very best environments for it are awful in that respect. FP code of the more "extreme" kind (Haskell, my dearest) is also, unfortunately, sometimes very detrimental to understanding.


I think I'd need a lot more than one claim with no evidence that this is true but I appreciate the thoughtful response all the same.

Outside of really edge-case stuff like real-time, low-level systems software, optimizing performance is not that hard, and I've worked with many engineers over my long career who can do it. They just rarely have the incentive. In a few particular areas where it's critical and users are willing to pay, software can still command a premium price. Ableton Live is a good example.


I don't think either of us needs to convince the other of anything, I'm mostly outlining this here so that maybe someone is actually inspired to think critically about some of these things, especially the workforce bits and the cost of dependencies.

> Outside of really edge-case stuff like real-time, low-level systems software, optimizing performance is not that hard, and I've worked with many engineers over my long career who can do it. They just rarely have the incentive. In a few particular areas where it's critical and users are willing to pay, software can still command a premium price. Ableton Live is a good example.

This seems like a generous take to me, but I suppose it's usually better for the soul to assume the best when it comes to people. Companies with explicit performance requirements will of course self-select for people capable of actually considering performance (or die), but I don't take that to mean that the rest of the software development workforce is actually able to, because I've seen so, so many examples of the exact opposite.


I guess I just see so many of these "the whole industry except for me is doing it wrong" kinds of posts, and almost never see anybody back them up with anything concrete, that I've grown extremely skeptical.


It's definitely your environment; Rust isn't nearly as popular for actual projects getting done as it is in (for example) StackOverflow's survey. C, C++, Odin and Zig are all languages where whatever C knowledge you have right now can trivially be put to use to do things, immediately. So I think it makes perfect sense that unless someone is just really into doing Rust things, they'll take the trivially applicable ways of working and use the aforementioned languages (which, it should be added, all have much better support for batch memory management and custom allocators than Rust).
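To be concrete about what I mean by batch memory management, here is a minimal sketch in C++ with std::pmr (the names and the frame/particle framing are made up, and this is obviously not how you'd write it in Odin or Zig): you allocate freely during one unit of work and then free the whole batch in one call.

    #include <memory_resource>
    #include <vector>

    struct Particle { float x, y, vx, vy; };

    // Everything allocated during one frame comes out of the same arena.
    void simulate_frame(std::pmr::memory_resource* frame_arena) {
        std::pmr::vector<Particle> spawned{frame_arena};
        std::pmr::vector<int>      dead{frame_arena};
        spawned.push_back({0.0f, 0.0f, 1.0f, 1.0f});
        dead.push_back(42);
        // ... per-frame work, no individual frees ...
    }

    int main() {
        std::pmr::monotonic_buffer_resource arena{64 * 1024};
        for (int frame = 0; frame < 3; ++frame) {
            simulate_frame(&arena);
            arena.release(); // one call frees the whole frame's allocations
        }
    }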


There are a lot of C programmers who do not like or are not fond of Zig or Odin. Many highly proficient C and C++ programmers consider them not worth their time, not even counting that both Zig and Odin are in beta. It's an investment to become as proficient in another language.

An example is Muratori from Handmade Hero, who publicly stated he dislikes Zig and pretty much sees it as a confused mess[1]. He thinks Odin is kind of OK because it has metaprogramming (which other languages do too), but that it is way below Jai. Between Jai and Odin, he thought Jai was the one with potential and the one that could get him to switch. Odin's slight popularity, as a kind of Jai-lite, is often attributed to Jai being a closed beta. But it appears that might change soon (Jai going public or opening up its beta), along with an upcoming book on Jai.

Programming languages like Go and V (Vlang) have more of a use case for new programmers and general-purpose work, because they are easier to learn and use. That was the point of their creation.

[1]: https://www.youtube.com/watch?v=uVVhwALd0o4 (Full Casey Muratori: Language Perf and Picking A Lang Stream)


I would suggest linking to a timestamped point in the video instead of the entire thing. With regard to Odin having metaprogramming, this is either a misunderstanding on your part of what was said or a misunderstanding from Casey Muratori with regard to Odin; Zig has a much stronger claim to metaprogramming, with `comptime` being as powerful as it is, whereas Odin can only scarcely be said to have a limited "static if" in the `when` keyword, plus your average parametric polymorphism via template parameters.
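For anyone unfamiliar with the distinction, here is roughly what a "static if" plus parametric polymorphism amounts to, sketched in C++ (`if constexpr` plus templates) as a close-enough analogue since neither Odin nor Zig code fits here; actual Zig `comptime` goes much further, letting you compute types and run arbitrary code at compile time.

    #include <cstddef>
    #include <cstdio>
    #include <type_traits>

    // Parametric polymorphism: T is just a type parameter, nothing is computed.
    // "Static if": the branch is picked at compile time and the untaken branch
    // is never compiled for that instantiation.
    template <typename T>
    std::size_t payload_bytes(const T& value) {
        if constexpr (std::is_pointer_v<T>) {
            return sizeof(*value); // only compiled when T is a pointer type
        } else {
            return sizeof(value);
        }
    }

    int main() {
        int    x = 0;
        double d = 0.0;
        std::printf("%zu %zu\n", payload_bytes(x), payload_bytes(&d)); // e.g. 4 8
    }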

C programmers (and C++ programmers) who don't like Zig or Odin can simply keep using C (or C++); it's not really the case that they have to switch to anything, much less something as overwrought as Rust. My point is that it's natural for someone with any proficiency level above 0 in C to want to apply that proficiency in something that allows them to make use of those skills, and C, C++, Odin and Zig all allow them to do that pretty much immediately.

You contend that "it's an investment to become as proficient in the other language", but I would argue that this is a very wide spectrum where Rust falls somewhere in the distance, whereas all of the languages I mentioned are far closer and more immediately familiar.


True, Odin is admittedly quite limited in that area. Muratori made the case that Jai was far beyond both Zig and Odin, which is what made it attractive to him and why he recommended its use. The other issue is investing in languages that are in beta.

The counter, in regards to "any proficiency above 0", is that there are other alternative languages (which are easier to learn and use) where knowledge of C can also be put to immediate use. Getting up to speed in those languages would likely be much faster, if not more productive, for less proficient or experienced programmers.


> The other issue, is investing in languages that are in beta.

I work full time with Odin and I wouldn't put too much stress on this point, as I've found that whatever issues we have with Odin are, for the most part, simply issues you'd have with C, i.e. C knowledge will help solve them. It's not a complex language, it doesn't need to do much, and language-specific issues are few and far between because of it. Tooling is a bit of a mixed bag, but it's about 80% solved as far as we're concerned.

Odin day-to-day for working on 3D engines (which I do) is not beta quality, whatever that means. I would rate it a better net fit for that purpose than I would C++, and I wrote the kind of C++ you would write for engine work for about 10-11 years (at least what we knew to be that kind of C++ at around 2001). For server software I'd rate it worse, but this is mostly because there are a few missing FFI wrappers for things that usually come up, it's not really because of the language per se.

As for Jai, I can't claim to know much of anything about it because I've never used it, and I wouldn't take many peoples' word for much of anything about the language. As far as I'm concerned Jai will exist when it's publicly available and until then it's basically irrelevant. This is not to disparage the effort, but a language is really only relevant to talk about once anyone with motivation and an idea can sit down, download/procure it and use it to build something, which is not the case with Jai.


> It's not a complex language

You cannot claim this when even codebases with veteran C developers will be virtually guaranteed to have memory safety issues at some point.


My comment was addressing the idea that Odin is a "beta language" in comparison to C and my assertion was that the issues we have with using Odin are the same ones you'd have with C. So assuming that you have accepted C-style issues, then Odin really does not present many or common additional roadblocks despite being relatively young.

Also, I know this may shock a lot of people who don't regularly use sharp tools: The possibility (and even occurrence) of memory safety issues and memory corruption doesn't tank a project, or make it more costly than if you'd used a GC, or Rust. Memory bugs range from very easily avoided and solved to surprising and very, very hard to solve.


> The possibility (and even occurrence) of memory safety issues and memory corruption doesn't tank a project, or make it more costly than if you'd used a GC, or Rust. Memory bugs range from very easily avoided and solved to surprising and very, very hard to solve.

I never said this. The Linux kernel and any other project will thrive with or without Rust. But it is a certainty that the higher the ratio of C to Rust in the world (or, well, to any memory-safe systems language), the more CVEs we will see. It would seem reasonable to me to assume that we want as few CVEs as possible.

To be honest, I also don't understand the dogmatic stance a lot of C developers have against Rust. Yes, it does cause some friction to learn a new language. But many of the correct idioms Rust enforces are things you should be doing in C anyway; it's just that the compiler is a lot more strict instead of just nagging you, or not nagging at all.
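For a concrete example of the kind of idiom I mean: single ownership with explicit hand-off is just good practice in C, something C++ can half-enforce with unique_ptr, and something Rust's compiler enforces outright. A small C++ sketch with made-up types:

    #include <memory>
    #include <utility>

    struct Connection { int fd = -1; };

    // `consume` takes ownership; the Connection is destroyed when conn goes out of scope.
    void consume(std::unique_ptr<Connection> conn) {
        // ... use the connection ...
    }

    int main() {
        auto conn = std::make_unique<Connection>();
        consume(std::move(conn)); // ownership is handed over explicitly
        // `conn` is now empty; touching it here is exactly the class of bug
        // that Rust turns into a compile error instead of a convention.
    }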


1000 lines of perfectly inoffensive and hard-to-argue-against code that you don't need, because it's not the right solution, is negative velocity. Granted, I don't think that's much worse with LLMs, but I do think it's going to be a growing problem as the cost of creating useless taxonomies and abstractions goes down.

That is to say: I think LLMs are going to make a problem we already had (much) worse.


I stopped using AI for code a little over a year ago and at that point I'd used Copilot for 8-12 months. I tried Cursor out a couple of weeks ago for very small autocomplete snippets and it was about the same or slightly worse than Copilot, in my opinion.

The integration with the editor was neat, but the quality of the suggestions was no different from what I'd had with Copilot much earlier, and the pathological cases where it just spun off into some useless corner of its behavior (recommending code that was already in the very same file, recommending code that didn't make any sense, etc.) seemed to happen more often than with Copilot.

This was a ridiculously simple project for it to work on, to be clear: just a parser for a language I started working on, and the general structure was already there for it to work with when I started trying Cursor out. From prior experience I know the base is pretty easy to work with for people who aren't even familiar with it (or with parsing in general), so given the difficulties Cursor had putting together even pretty basic things, I think a Cursor user might see minimal gains in velocity and end up having less understanding in the medium to long term, at least in this particular case.


The Cursor autocomplete? It's useful, but the big thing is using Sonnet with it.


I tried it with Claude 3.5 Sonnet (or whatever the name is), both tab-completed snippets and chat (to see what the workflow was like and whether it gave access to something special).


This means nothing if you host it on non-US servers. No one would take it seriously internationally.


Can't scroll on desktop Firefox, and the entire site goes black in Edge. I'm sure this would work in normal Chrome, but why would I even care to try it at this point? If I can't even open the framework's page, I think it's worth a skip.


What was the language and proposed architecture? Nothing is quite as grating as being hired to work on an Erlang/Elixir code base and seeing basically no correct usage of the platform; it would definitely result in an instant suggestion to start correctly leveraging Erlang, piece by piece. It's not even a question of architecture as much as it is completely ignoring what makes the platform good, IMO.


It was all C++. The existing architecture consisted of a number of different executables, which were mostly statically linked (except for stuff like the C and C++ runtime libraries). His proposal was to factor the business logic out of each executable into a shared library, and then (if memory serves) we would only need a single executable that could work with any of these new shared libraries. Like a plugin architecture.

This would mean that when rolling out a new version, usually only the shared library would need to be replaced. So: significantly more complexity, due to now having a more arms-length interface between the executable and the business logic in the shared libraries. And the payoff was... when deploying, we would upgrade a shared library instead of an executable? Still doesn't seem worthwhile.
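For anyone who hasn't dealt with this kind of setup, the proposal boils down to something like the following (a hypothetical sketch in C++ with POSIX dlopen; `plugin_run` and the host layout are made up, not the actual code). The "arms-length interface" is the symbol name and signature both sides have to agree on and keep stable by hand.

    #include <dlfcn.h>
    #include <cstdio>

    // The host knows nothing about the business logic except this signature.
    using run_fn = int (*)(int argc, char** argv);

    int main(int argc, char** argv) {
        if (argc < 2) { std::fprintf(stderr, "usage: host <plugin.so>\n"); return 1; }

        void* lib = dlopen(argv[1], RTLD_NOW);
        if (!lib) { std::fprintf(stderr, "dlopen: %s\n", dlerror()); return 1; }

        // Both host and plugin must agree on this symbol name and signature.
        auto run = reinterpret_cast<run_fn>(dlsym(lib, "plugin_run"));
        if (!run) { std::fprintf(stderr, "dlsym: %s\n", dlerror()); return 1; }

        int rc = run(argc - 1, argv + 1);
        dlclose(lib);
        return rc;
    }

All of that is extra moving parts compared to just shipping a new statically linked executable and restarting it.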


Only in the sense that you seem not to understand how terrible big companies, big teams and big code bases are for efficiency and productivity. They push the bar for reaching the desired results way up, and the time it takes to get there even more. No one should ever want to be part of organizations, teams or code bases like this, for their own sake.

