
For anyone curious about the vulnerabilities, this Ars article from November 2024 is a good read: https://arstechnica.com/information-technology/2024/11/micro...

std::move is just a cast operation. A better name might be std::cast_as_rvalue, since all it does is force overload resolution to pick the move constructors/etc. that intentionally "destroy" the argument (leave it in a moved-from state).
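
For reference, the whole thing compiles away to nothing. Here's a sketch along the lines of how the standard library defines it (renamed move_ here so it doesn't collide with the real one):

    #include <type_traits>

    // Essentially what std::move does: nothing happens at runtime, it is
    // purely a cast that lets overload resolution pick rvalue overloads.
    template <typename T>
    constexpr std::remove_reference_t<T>&& move_(T&& t) noexcept {
        return static_cast<std::remove_reference_t<T>&&>(t);
    }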


They don't destroy the argument - this is of course a big problem, because the semantics programmers actually wanted (even back when C++98 didn't have move and papers were proposing this new feature) was what C++ programmers now call "destructive move", i.e. the move Rust has. This is sometimes portrayed as some sort of modern idea, but it was clearly what everybody wanted 15-20 years ago; it's just that C++ didn't deliver it.

What they got was this awful compromise: the object isn't destroyed - C++ promises it will only finally be destroyed when the scope ends, and always then - so instead a "hollowed out" state is created, some state (usually unspecified but predictable) in which it is safe to destroy it.

Creating the "hollowed out" state for the moved-from object so that it can later be destroyed is not zero work. It's usually trivial, but since we gain no benefit from doing that work, it's pure waste.
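
A small illustration of that non-destructive move; the comments mark where the "hollowed out" state shows up:

    #include <string>
    #include <utility>
    #include <vector>

    int main() {
        std::string s = "hello";
        std::vector<std::string> v;
        v.push_back(std::move(s)); // s's buffer may be stolen here...
        // ...but s is NOT destroyed. It's left in a valid but unspecified
        // ("hollowed out") state whose only job is to be safely destructible.
        s = "reassignment is fine"; // giving it a new value is also allowed
        return 0;
    } // only here, at scope end, does s's destructor actually run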

This constitutes one of several unavoidable performance leaks in modern C++. They're not huge, but they're a problem when you still have people who mistake C++ for a performance language rather than a language like COBOL focused intently on compatibility with piles of archaic legacy code.


Thanks for pointing this out. It's an absolute myth that C++ move semantics are due to backwards compatibility. The original paper on move semantics, dating back to 2002, explicitly mentions destructive move semantics by name:

https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2002/n13...

It does bring up an issue involving how to handle destructive moves in a class hierarchy, and while that's an issue, it's a local one that would need careful consideration only in a few corner cases - as opposed to the move semantics we have today, which sprinkle the potential for misuse all over the codebase.


I started a new project recently and chose C++ because I wanted something cross-platform and a language that would let me write the highest-performance code I could imagine. C is so lacking in abstractions that I don't think I can deal with it. But C++ is such a pain that I keep looking at Rust and feeling temptation. I'm doing some number crunching and geometric algorithms, among other things. Not sure if Rust is as good as C++ there.


I'm probably the wrong person to ask, because for me Rust seemed like home almost immediately, and that's not most people's reaction.

The brute optimisation for Rust is being done by LLVM, just as if you used Clang to compile C++, so your pure number crunching ought to be fine. If anything, you may find it's easier to end up correctly expressing the thing you meant with good performance in Rust. If you rely on a C++ library of geometric algorithms, clearly "I can't find an equivalent in Rust" would be a showstopper, so it's worth stopping by crates.io to try a few searches for whatever keywords are in your head.

Also, if you know that learning new stuff fogs up your process, you might not want to try to learn Rust and work on this novel project simultaneously. Some people thrive pairing learning a language with a new project; others hate that and would rather pick one: either do something old in a new language, or something new in a language they already know.

If you decide this isn't the right time but keep feeling a twinge, I encourage you to try it for something else. Not everybody is going to like Rust, but it's a rare C++ programmer who spends serious time learning it and then decides there was nothing they valued in the experience -- particularly if you have no prior experience with an ML-family language (F# and OCaml are modern examples).


Thanks. I've learned a lot of languages and enjoy doing it, especially when much of it is a step up, so no problem there. I may need to just dive in and try it on a larger project. It was only after doing that with C++ that I really understood what I liked and what I didn't. A lot of the latter is the tooling/IDEs, which doesn't show up when reading about the language. One thing I'm not sure about with Rust is porting a UI class hierarchy from C++: base class `View`, subclasses `Button`, `VStack`, `TextField`, etc. I see how to replace virtual functions with a trait and impls for the various types. But for stuff (fields or methods) shared in the base class, this looks like one area where Rust is uglier than C++.


Even zooming on desktop (Firefox on macOS) is broken. I want to zoom in to see the street names and investigate the images, but the site makes it impossible. I can download the images to my desktop, but they are low resolution. What a cool project, soured by such awful technology that didn't need to exist in the first place.


This is fun. The UI and concept are well-executed.

I wish it did Farnsworth timing, though. The idea is that the individual characters play at full 30-words-per-minute speed, but the characters are spaced out to match your target listening rate.

You want to hear each letter as a distinct sound rather than hearing individual dits and dahs. The added time between characters with Farnsworth timing gives your audio/memory system time to make the connection, rather than slowing the whole character down so that you have to remember that dit-dit-dah is U.

I can typically hear at about 30 wpm with Farnsworth timing. It is, for me, much harder to hear when slowed down to 10 wpm with the dits and dahs themselves slowed way down.
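
For anyone curious about the arithmetic, here's a rough sketch assuming the usual PARIS convention (a standard word is 50 dit units: 31 units of character audio and 19 units of spacing):

    #include <cstdio>

    // Sketch of Farnsworth spacing: characters are keyed at char_wpm while
    // extra silence stretches the overall rate down to overall_wpm.
    struct FarnsworthTiming {
        double dit_sec;        // dit length at the character speed
        double inter_char_sec; // gap between characters
        double inter_word_sec; // gap between words
    };

    FarnsworthTiming farnsworth(double char_wpm, double overall_wpm) {
        double dit = 1.2 / char_wpm;         // standard PARIS dit time
        double char_audio = 37.2 / char_wpm; // 31 units of audio per word
        double budget = 60.0 / overall_wpm - char_audio; // silence per word
        // Per word there are four 3-unit character gaps and one 7-unit word
        // gap (19 spacing units total); stretch each unit equally.
        return {dit, 3.0 * budget / 19.0, 7.0 * budget / 19.0};
    }

    int main() {
        FarnsworthTiming t = farnsworth(30.0, 10.0); // 30 wpm chars, 10 wpm overall
        std::printf("dit %.0f ms, char gap %.0f ms, word gap %.0f ms\n",
                    t.dit_sec * 1000, t.inter_char_sec * 1000, t.inter_word_sec * 1000);
    }

Sanity check: with char_wpm equal to overall_wpm, the budget works out to 22.8/wpm seconds per word and the gaps collapse back to the standard 3-unit character gap and 7-unit word gap.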

It has taken a few months of practice. I'm still too nervous to use it on the ham bands aside from scheduled chats with friendly operators. My favorite way to learn has been guided courses with experienced CW (Morse code) operators from CWops.


Obligatory warning that SSRI withdrawals can be extremely dangerous.

I was on Lexapro for a minute. It worked for a bit, but then I started to not care about anything. A certain amount of anxiety/emotional swing is important for my humanity, as I found out. I really wanted to get off that stuff. But my doctor insisted that I wean off of it by reducing my dosage over a period of two months. I'm glad I listened. I could acutely feel each reduction.


It is largely unenforced by the FCC directly, but ham operators can (and do) use directional antennas to find you in many cases. Once reported, the FCC does take violations seriously.


> workflow engines are somewhat of a design smell

Probably so, but the real design smell seems to be thinking of a workflow engine as a panacea for sustainable business process automation.

You have to really understand the business flow before you automate it. You have to continuously update your understanding of it as it changes. You have to refactor it into sub-flows or bigger/smaller units of work. You have to have tests, tracer-bullets, and well-defined user-stories that the flows represent.

Otherwise your business-flow automation accumulates process debt, just as a full-code solution accumulates technical debt.

And, just like technical debt, it's much easier (or at least more interesting) to propose a rewrite or framework change than it is to propose an investment in refactoring, testing, and gradual migrations.


Sometimes you really do want exactly that "random" string. This is common with error messages, model numbers, build hashes, etc. If I'm searching for B9GDSIGH as the model number for my refrigerator, I really don't want to see B9GDSIGY.


But if it links to the B9GDSIG series refrigerator, which has the 240v H and 120v Y subtypes, then it would be correct in suggesting that?

Same with error messages - they often have timestamps, or local object IDs/memory addresses, which you also want to be fuzzy-matched.

I think the issue is the de-emphasis of "power" modifiers in Google - it's less obvious how to say "this part of the string needs an exact match, this part can be fuzzy."


In that case, click the "must contain" link and it resubmits with the query wrapped in quotes. Or just quote the query yourself on the first go if you know it must match.


Google no longer respects quotes (and hasn't in a while). It's very hard to get Google to actually say there aren't any results, even when in fact there are no matching results.


They respect it when they submit it, then: every time I've used that function and seen them update the query with quotes, it has come back with different results. I've never cared to look at the search query in the URL, so maybe they also add an additional parameter that tells the back end specifically to obey the quotes on this resubmission? So at some point, the quotes aren't ignored.


That's not my experience.

https://www.google.com/search?q=%22kgirbudidndijrjjr%22 gives me "Your search - "kgirbudidndijrjjr" - did not match any documents.", at least it will until they index this comment and find kgirbudidndijrjjr


Quotes are more like guidelines these days.


On the advanced search page, there's still the option to specify that results 'must contain' something, but I'm not sure whether it's just a suggestion like quotes or not.


I "love" how we've reached a point where we so distrust this company specifically, and dark-pattern UIs in general, that we almost anticipate placebo buttons.


Humans are really good pattern matchers. We can formalize a problem into a mathematical space, and we have developed lots of tools to help us explore that space. But we are not good at methodically and reliably exploring a problem space where finding solutions is NP-complete.


(Not an AI researcher, just someone who likes complexity analysis.) Discrete reasoning is NP-complete. You can get very close with the stats-based approaches of LLMs and whatnot, but your minima/maxima may always turn out to be local rather than global.


Maybe theorem proving could help? Ask GPT-4o to produce a proof in Coq and see if it checks out... or split it into multiple agents -- one produces the proof of the closed formula for the tape-roll thickness, and another one verifies it.


I had the thought recently that theorem provers could be a neat source of synthetic data. Make an LLM generate a proof, run it through the prover to evaluate it and label it as valid/invalid, then fine-tune the LLM on the results. In theory it should then more consistently create valid proofs.
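
A sketch of that loop (everything below is a hypothetical stand-in: generate_proof for the model call, checks_out for actually running the prover, fine_tune for the training step):

    #include <cstdio>
    #include <string>
    #include <vector>

    struct Labeled { std::string proof; bool valid; };

    // Stand-in for an LLM call that emits a candidate Coq proof.
    std::string generate_proof(const std::string& statement) {
        return "(* candidate proof for: " + statement + " *)";
    }
    // Stand-in for running the candidate through the Coq checker.
    bool checks_out(const std::string& proof) {
        return !proof.empty(); // a real checker would return the prover's verdict
    }
    // Stand-in for a fine-tuning step on the labeled corpus.
    void fine_tune(const std::vector<Labeled>& data) {
        std::printf("fine-tuning on %zu labeled proofs\n", data.size());
    }

    int main() {
        std::vector<std::string> statements = {"forall n, n + 0 = n"};
        std::vector<Labeled> corpus;
        for (const auto& s : statements) {
            std::string p = generate_proof(s);
            corpus.push_back({p, checks_out(p)}); // the prover supplies the label
        }
        fine_tune(corpus); // bias the model toward proofs that actually check
    }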


Sure, but those are heuristics and feedback loops. They are not guaranteed to give you a solution. An LLM can never be a SAT solver unless it's an LLM with a SAT solver bolted on.
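
To make the distinction concrete, here's a toy version of the thing you'd bolt on: an exhaustive CNF checker that is guaranteed to find a satisfying assignment if one exists (real solvers use DPLL/CDCL rather than brute force, but the guarantee is the point):

    #include <cstdio>
    #include <cstdlib>
    #include <vector>

    using Clause = std::vector<int>; // +v means variable v, -v means NOT v

    // Exhaustively test every assignment: exact, exponential, never wrong.
    bool satisfiable(const std::vector<Clause>& cnf, int num_vars) {
        for (unsigned mask = 0; mask < (1u << num_vars); ++mask) {
            bool all_clauses_hold = true;
            for (const Clause& c : cnf) {
                bool clause_true = false;
                for (int lit : c) {
                    bool val = (mask >> (std::abs(lit) - 1)) & 1;
                    if ((lit > 0) == val) { clause_true = true; break; }
                }
                if (!clause_true) { all_clauses_hold = false; break; }
            }
            if (all_clauses_hold) return true; // a genuine witness, not a guess
        }
        return false;
    }

    int main() {
        // (x1 OR x2) AND (NOT x1 OR x2) AND (NOT x2 OR x3)
        std::vector<Clause> cnf = {{1, 2}, {-1, 2}, {-2, 3}};
        std::printf("%s\n", satisfiable(cnf, 3) ? "SAT" : "UNSAT");
    }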


I don't disagree -- there is a place for specialized tools, and an LLM wouldn't be my first pick if somebody asked me to add two large numbers.

There is nothing wrong with an LLM + SAT solver -- especially if, for an end user, it feels like they have one tool that solves their problem (even if under the hood it's 500 specialized tools governed by an LLM).

My point about producing a proof was more about exploratory analysis -- sometimes reading (even incorrect) proofs can give you an idea for an interesting solution. Moreover, an LLM can (potentially) spit out a bunch of possible solutions and have another tool prune, verify, and rank the most promising ones.

Also, the problem described in the blog post is not a decision problem, so I'm not sure it should be viewed through the lens of computational complexity.

