
The comment in your frobulate code suggests one is to forbulate, yet reading the code it is very clear that there will only be frobulation happening in the outlined proceedings. Perhaps a minor revision is required.

Can you show me how Rust does this? I'm genuinely curious. I've made a toy example to show how C++ checks for undefined behavior at compile time; I am unaware of Rust being able to do the same without runtime costs (however small they may be, this is a toy example after all): https://godbolt.org/z/cT9bqz8z7


The point is that Option in Rust doesn't have undefined behavior in any case, even if the values aren't known at compile time. Exhaustiveness is always checked at compile time, unlike C++, where operator* offers an escape hatch in which nothing is checked in non-constexpr contexts.

"Make everything constexpr" isn't a real solution to UB, in the same way that "make all functions pure" isn't a solution for managing side effects.

Not adding UB to your APIs, on the other hand, is a real solution.
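
To make that concrete, here's a minimal sketch (a toy, not the code from the linked godbolt) of the safe path for a value only known at runtime: the match is checked for exhaustiveness at compile time, so forgetting the None arm is a compile error rather than UB.

    fn value_or_default(x: Option<i32>) -> i32 {
        // The compiler requires every variant to be handled;
        // deleting the None arm makes this fail to compile.
        match x {
            Some(v) => v,
            None => -1,
        }
    }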


You can actually implement the C++ behavior, if you want:

    unsafe fn super_unwrap<T>(x: Option<T>) -> T {
        match x {
            Some(val) => val,
            // SAFETY: the caller must guarantee that x is Some.
            None => std::hint::unreachable_unchecked(),
        }
    }

But defaults matter, and Rust certainly doesn’t make this kind of thing ergonomic (which is a correct decision on the Rust designers’ part).


You don't have to write this; it already exists as the (unsafe, of course) method Option::unwrap_unchecked

Because all of Rust's methods can be called as free functions, you can literally write Option::unwrap_unchecked for the same behaviour, or you can call some_option.unwrap_unchecked() (in both cases you will need to be in an unsafe context for this to be allowed and should write a SAFETY comment explaining why you're sure it's correct)
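
A minimal sketch of both call forms (the function and variable names here are just placeholders, not anything from the standard library):

    fn first_element(v: Vec<i32>) -> i32 {
        assert!(!v.is_empty());
        let first = v.into_iter().next();
        // SAFETY: the assert above guarantees `first` is Some.
        // The free-function form would be Option::unwrap_unchecked(first).
        unsafe { first.unwrap_unchecked() }
    }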


I see. I didn't know that method existed despite spending ~4.5 years writing Rust.


Ha, same. I very very rarely write code in unsafe contexts which is why, I guess.


Yeah, absolutely. My point is that Option itself doesn't give you this API and to make an unsafe version, you have to explicitly write it.

Including UB in easy to misuse places is totally unnecessary and a footgun which really does cause issues in real code.


Yep, I agree completely. Just wanted to point out for completeness that Rust can theoretically do the same thing as C++.


Compile time checked pattern matching: https://doc.rust-lang.org/book/ch18-03-pattern-syntax.html


That matches the 'static_assert' portion of my sample code. The implied claim of the parent I replied to was that Rust could do this even for runtime values, such as the one I am using in the main of my sample. In C++ it is the same function running both the compile-time check and the unchecked runtime variant, so there is zero overhead at runtime. I can't think of a way Rust would be able to make the same code in my sample safe without adding runtime checks. If I am mistaken here I sure would like to know.


You're correct. Rust can't statically prove which enum variant a runtime value holds. You do need a runtime switch; the difference is that (at least in safe code) it statically forces you to indeed do that runtime switch.


You aren't mistaken. I should've written "runtime overhead" - my point is that there is no runtime performance penalty for getting rid of the UB in the Option API.

An equivalent API with no UB is just strictly better.


This borrow checker runs at runtime, which I find less interesting. Everything starts to look a lot like std::unique_ptr, which I think is mostly unneeded as it adds pointer indirection.

Could someone explain to me when one would use this? Is it for educational purposes perhaps?


I don't think it is intended to be used in a real system, this was more of an experiment to see what was possible. C++ as a language isn't well-suited to supporting a compile-time borrow checker. The difficulty of retrofitting C++20 modules to the language is probably just a glimmer of the pain that would be involved in making a borrow checker work.

There is a place for runtime borrow checking. Some safe cases in well-designed code are intrinsically un-checkable at compile-time. C++ is pretty amenable to addressing these cases using the type system to dynamically guarantee that references through a unique_ptr-like object are safe at the point of dereference. Much of what the borrow checker does at compile-time could potentially be done at runtime with the caveat that it has an overhead.

This has more than a passing resemblance to how deadlock-free locking systems work. They don't actually prevent the possibility of deadlocks, as that may not be feasible, but they can detect deadlock conditions and automatically edit/repair the execution graph to eliminate the deadlock instance. If a deadlock occurs in a database and no one notices, did it really happen?


Hey, I am the author of this. I made it mostly for the purpose of experimenting, playing around, and trying out things rather than actually using it for production projects. Making a proper compile time checker is pretty complicated (possibly impossible) without actually getting into the compiler; this just intends to emulate that behavior to some extent and have a similar interface. "educational purposes" -> well, kinda, I had some free time and had an interesting idea perhaps


> pretty complicated (possibly impossible)

Rust does it at compile time, so why can't C++? To me this detail completely kills the usefulness of this project


C++ cannot because it does not have the necessary information present in its syntax. It’s really that simple. C++ could add such syntax, but outside of what Circle is doing, I’m not aware of any real proposal to add it.

Also, Google (more specifically, the Chrome folks) tried to make it work via templates, but found that it was not possible. There’s a limit to template magic, even.


Although it's not as extensive as Rust's lifetime management, Nim manages to infer lifetimes without specific syntax, so is it really a syntax issue? As you say, though, C++ template magic definitely has its limits.


Nim has a garbage collector.

That said, you're right on some level that it's truly semantics that matter, not syntax, but you need syntax to control the semantics.


Nim is stack allocated unless you specifically mark a type as a reference, and "does not use classical GC algorithms anymore but is based on destructors and move semantics": https://nim-lang.org/docs/destructors.html

Where Rust won't compile when a lifetime can't be determined, IIRC Nim's static analysis will make a copy (and tell you), so it's more as a performance optimisation than for correctness.

Regardless of the details and extent of the borrow checking, however, it shows that it's possible in principle to infer lifetimes without explicit annotation. So, perhaps C++ could support it.

As you say, it's the semantics of the syntax that matter. I'm not familiar with C++'s compiler internals though so it could be impractical.


I hadn't heard that Nim made ORC the default, thanks for that!

I still think that my overall point stands: sure, you can treat this as an optimization pass, but that kind of overhead isn't acceptable in the C++/Rust world. And syntax is how you communicate programmer intent, to resolve the sorts of ambiguous cases described in some other comments here.

I am again reminded of escape analysis https://steveklabnik.com/writing/borrow-checking-escape-anal...


> Where Rust won't compile when a lifetime can't be determined, IIRC Nim's static analysis will make a copy (and tell you), so it's more as a performance optimisation than for correctness.

Wait, how does that work? For example, take the following Rust function with insufficient lifetime specifiers:

    pub fn lt(x: &i32, y: &i32) -> &i32 {
        if x < y { x } else { y }
    }
You're saying Nim will change one/all of those references to copies and will also emit warnings saying it did that?


It will not emit warnings saying it did that. The static analysis is not very transparent. (If you can get the right incantation of flags working to do so and it works, let me know! The last time I did that it was quite bugged.)

Writing an equivalent program is a bit weird because: 1) Nim does not distinguish between owned and borrowed types in the parameters (except wrt. lent which is bugged and only for optimizations), 2) Nim copies all structures smaller than $THRESHOLD regardless (the threshold is only slightly larger than a pointer but definitely includes all integer types - it's somewhere in the manual) and 3) similarly, not having a way to explicitly return borrows cuts out much of the complexity of lifetimes regardless, since it'll just fall back on reference counting. The TL;DR here though is no, unless I'm mistaken, Nim will fall back on reference counting here (were points 1 and 2 changed).

For clarity as to Nim's memory model: it can be thought of as ownership-optimized reference counting. It's basically the same model as Koka (a research language from Microsoft). If you want to learn more about it, because it is very neat and an exceptionally good tradeoff between performance/ease of use/determinism IMO, I would suggest reading the papers on Perseus as the Nim implementation is not very well-documented. (IIRC the main difference between Koka and Nim's implementation is that Nim frees at the end of scope while Koka frees at the point of last use.)


Oh, that's interesting. I think not distinguishing between owned and borrowed types clears things up for me; it makes a lot more sense for copying to be an optimization here if reference-ness is not (directly?) exposed to the programmer.

Thanks for the explanation and the reading suggestions! I'll see about taking a look.


> It will not emit warnings saying it did that.

You're right. I was sure I read that it would announce when it does a copy over a sink, but now that I look for it I can't find it!

> The static analysis is not very transparent.

There is '--expandArc' which shows the compile time transformations performed but that's a bit more in depth.


I'm pretty sure you could embed a language with lifetimes in a DSL built with C++ templates. You wouldn't want to use it beyond toy programs though.


Maybe, but nobody has demonstrated that it's actually possible. And even then, toys are fun, but still, at the end of the day, not good enough.


Of course, it would be completely impractical. Nobody has demonstrated it because everyone who tried was interested in a practical solution.


Well, that's how the current C++ compilers/standard are. There is a limit to what a header/library can do.


> pretty complicated (possibly impossible) without actually getting into the compiler


I think it's more of a "can I do this" project, rather than a product that can be used in prod


> Could someone explain to me when one would use this?

For memes, obviously.

Me: I want Rust!

Tech lead: We have Rust at home!

Rust at home: rusty.hpp


> Could someone explain to me when one would use this? Is it for educational purposes perhaps?

The goal/why is, as almost always, explained in the README:

> rusty.hpp as the time or writing this is a very experimental thing. Its primary purpose is to experiment and test out different coding styles and exploring a different than usual C++ workspace.

TL;DR: it's an experiment


> Everything starts to look a lot like std::unique_ptr, which I think is mostly unneeded as it adds pointer indirection.

Interesting, why is this? I would have assumed the compiler could have optimized away that indirection.

[1] https://godbolt.org/z/9Pqqqz5a7



Rust does "borrow checking at runtime" with RefCell<>.
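
A tiny illustration of what that runtime check looks like (using try_borrow_mut so the conflict shows up as an Err rather than a panic):

    use std::cell::RefCell;
    fn main() {
        let cell = RefCell::new(5);
        let a = cell.borrow_mut();
        // A second mutable borrow compiles fine; the aliasing rule is
        // only enforced here, at runtime, so this returns an error.
        assert!(cell.try_borrow_mut().is_err());
        drop(a);
    }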


Right, but RefCell is optional. If you don't use it, you get checking at compile time.


I feel your pain. For me the biggest hurdle was taken away once I realized the following 3 things:

- everything in Rust-land is named differently. The concepts are not different, they just use different names.

- nothing wants to be a value

- nothing is implemented by default.

If you can get those 3 items into your brain, you can start thinking in Rust. Coming from one C++ dev to another: you may still not like the language. I know I don't, even though I agree with the premise that defaults should be ‘safe’


I've had an electric one (heater in the base) for more than 15 years and it recently broke in a way that was not repairable. I really want to buy a new electric one with a reasonable volume, but they seem to be very rare. I'm thinking of brazing a heater element to a normal one; the automatic shut-off feature and consistent temperature profile are must-haves for me.


I started using this library last week. It was easy to get started and running, but when I wanted to do stuff beyond the provided example, I found nothing worked as I had expected. For example, you supply a user pointer when you create a lws_context, and there is a user pointer argument in the callback where you do all the actual work. One would expect these to be the same, but they are not. Instead you have to use two different calls to get from your current connection to the user pointer which is set for all connections. Did it work? Yes. But it was very surprising behavior.

Another problem I ran into was getting the event loop to be non-blocking for use in a coroutine. Apparently I was expected to use one of the preselected event libraries, which I was not. I was expected to implement a dozen or so callbacks for which there was little or no helpful documentation. Eventually I found that there was a hack where I could pass -1 to the timeout parameter, and then the service call was non-blocking. It would have been fine if the call had blocked for a millisecond or so, but that did not work; the timeout parameter is effectively either -1 or anything else. I kept bumping into surprises like this.

The library solved a problem for me, so I will continue to use it. Unfortunately I can't recommend it to anyone unless you are willing and able to spend a good portion of your time getting this beast tamed.


But that "just" sounds like lacking documentation and a bit of missing support for your specific use case, or was anything actually broken or implemented in a completely stupid way? I didn't have to use websockets in C yet, but it would be good to have something ready in case I do at some point. Having to dig through source code instead of docs is something I'm used to. Are there any other contenders you looked at?


I've had a very similar experience a few weeks ago and even fell for the same double-userdata misunderstanding.

The documentation itself is actually good and extensive; however, it feels like the authors expect its users to deeply understand the library itself. It is also harder to find some "Getting Started" docs, as most provided examples felt way too bloated coming from a nodejs/ws background. Compared to the other library[1] I was considering, it took much longer to get even a simple "echo" server running.

However, having used lws for some time now, I am really happy with it! The API is very clean, mostly intuitive, and provides everything you need without feeling bloated or becoming too verbose. Sometimes documentation still feels a bit harder to find, but it can be figured out eventually. One great feature for me was being able to accept both WebSocket and raw TCP connections on the same port; this is extremely easy and just requires setting the flag LWS_SERVER_OPTION_FALLBACK_TO_RAW.

I encountered other hiccups. They are fully documented and completely valid, but were really confusing to me as a first-time user:

* Sending data requires a specific memory layout[2] – namely having to allocate memory for the websocket header yourself before the actual message you want to send. This gave me confusing segfaults in the beginning.

* Sending data / responding to a client message will probably (but not always) fail when just naively using "lws_write()". To correctly send data you need to manually queue your message data, request the "ON_WRITABLE" callback[3] and only then actually send.

[1]: https://github.com/Theldus/wsServer

[2]: https://libwebsockets.org/lws-api-doc-v3.0-stable/html/group... (see the "IMPORTANT NOTICE")

[3]: https://libwebsockets.org/lws-api-doc-main/html/group__callb...


I found the same with this library. I'm not sure if it's missing documentation, unintuitive design or both, but I always found it a bit of a struggle to use beyond simple cases.


I have played with this idea on and off over the last few years. For C++, mind you. I have started writing such a thing a few times and each time ended up concluding I was making it way too complex. I wholeheartedly agree with the idea/concept, but the implementation feels too complex for my taste.


There is work being done on this. Search for: software defined microwave (SDC)


Super cool. Thanks for the suggestion, I found this [0], but there appears to be more out there.

[0] https://www.hcii.cmu.edu/news/software-defined-cooking-using...


This was very much in line with my own thinking, or at least that is what I suspect and can’t know for sure.

With the recent AI buzz I got to thinking: maybe what to keep from the story are the higher-level concepts that exist only after the underlying layers have been trained, and which therefore cannot be addressed before.


I’ve been using this technique as well, but I found that debugging static_asserts is quite hard. I often fall back to calling the failing test at runtime and stepping through. Any suggestions for a different workflow?


IMHO the best approach is to avoid the problem by applying TDD; then there is very little need to debug anything. But otherwise, there is https://github.com/mikael-s-persson/templight for compile-time debugging, which is pretty cool, and having something like `expect(auto... args) static_assert(args...); assert(args...);` may help with being able to debug at run-time and get the coverage (though the code has to compile, aka pass, first).


Keep tests and code separate, and use tdd: https://github.com/yellowdragonlabs/TDD

I usually have dozens/hundreds of tests that run faster than you can compile #include <vector>.

A shell script executes all tests automatically every time I save. It's very nice to watch the output while coding, with almost no latency.

