Hacker News | gnull's comments

Those descriptors like 5 could be mapped to anything, including descriptor 1, stdout.
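
For illustration, a shell one-liner that maps descriptor 5 onto stdout, so whatever a program writes to FD 5 comes out on FD 1:

```shell
# Duplicate FD 5 onto stdout for the duration of the braced group:
# the write to descriptor 5 shows up on descriptor 1.
{ echo "written to fd 5" >&5; } 5>&1
```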


That's such a user-hostile design decision. I can't fathom what justifies it (other than kinky taste).

It makes your commands unreadable without a manual and leaves a lot of room for errors that are quietly ignored. And it forces you into using a shell that comes with its own set of gotchas; bash is not known to be a particularly good tool for security.

And to those who say this adds flexibility: it doesn't. Those file descriptors are available under /dev/fd on Linux, so with named options you can do --pk /dev/fd/5. Or make a named pipe.
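
For illustration (with cat standing in for a hypothetical path-taking tool; this requires /dev/fd to exist, e.g. via procfs on Linux):

```shell
# The shell opens descriptor 5 on a file; a tool that only accepts paths
# can still consume it through /dev/fd. 'cat' stands in for a hypothetical
# tool that takes a path argument.
printf 'key material\n' > /tmp/sk.demo
exec 5</tmp/sk.demo       # map descriptor 5 to the file
cat /dev/fd/5             # path-based tool reads descriptor 5
exec 5<&-                 # close descriptor 5
rm /tmp/sk.demo
```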


> Those file descriptors are available under /dev/fd on linux, with named options you can do --pk /dev/fd/5.

If you have a procfs mounted at /proc and are allowed the open syscall on it, sure (and even then, it’s wasteful and adds unnecessary failure paths). Even argument parsing is yet more code to audit.

I think the design is pretty good as-is.


It's 2025, dude. You can't be seriously telling me how difficult it is to parse arguments. It may be difficult in C, but then we're down another sick rabbit hole of justifying a bad interface with a bad language choice.

One open syscall, in addition to the dozens already made before your main function starts, will have no observable effect whatsoever.


The context is what’s essentially a shell-accessible library for a minimal set of cryptographic primitives. It’s very reasonable to want it to be as lightweight, portable, and easy to audit as possible, and to want it to run in environments where (continuing on Linux for example) the open syscall to /dev/fd/n -> /proc/self/fd/n will not succeed for whatever reason, e.g. a restrictive sandbox.

Not involving argument parsing simplifies the interface regardless of how easy the implementation is, and the cost is just having to look up a digit in a manual that I certainly hope anyone doing raw ed25519 in shell is reading anyway.


Make a named pipe then. Shells have built-in primitives for that, e.g. <() and >() process substitutions in bash, or psub in fish. Or have an option to read either a file descriptor or a file.
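
A sketch of both, assuming bash for the process substitution (which, on Linux, is itself backed by /dev/fd):

```shell
# Process substitution: bash synthesizes a readable "path" for a pipe it
# sets up behind the scenes (on Linux this is /dev/fd/N again).
cat <(printf 'public key bytes\n')

# An explicit named pipe needs no bash-specific syntax and no procfs:
mkfifo /tmp/pk.fifo
printf 'key via fifo\n' > /tmp/pk.fifo &   # writer blocks until a reader opens
cat /tmp/pk.fifo
wait
rm /tmp/pk.fifo
```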

I can't understand why you keep inflating the difficulty of simple commandline parsing, which the tool needs to do anyway; we shouldn't even be talking about it. Commandline parsing code is written once (and read once per audit), while the hostile user interface that a bad command line creates takes effort every time someone invokes the tool. If the tool has 1000 users, then the bad interface's overhead has 1000× the weight when measured against the overhead of implementing commandline parsing. This is preposterous.

> Not involving argument parsing simplifies the interface

From an interface perspective, how is `5>secretkey` simpler than `--sk secretkey`? The latter is descriptive, searchable, and allows bash completion. I'll type `ed25519-keypair`, hit tab, and recall what the argument is called.

You can't justify a poorly made interface that is unusable without the manual open side by side. Moreover, the simplest shell scripts that call this tool are unreadable (and thus unauditable) without the manual.

  ed25519-keypair 5>secretkey 9>publickey
You see this line in a shell script. What does it do? Even before asking deeper crypto-specific questions, you need to know what's written to the "secretkey" and "publickey" files. You will end up spending your time (even a minute) and context-switching to check the descriptor numbers instead of doing something actually useful.


> which the tool needs to do anyway

It doesn’t. The tool has no command-line arguments.

Please learn how the various shell concepts you’re referencing (like <()) actually work and get back to me if you still need to after that.

In any case, I’m well aware of the readability benefit of named arguments, and was when I made the original comment. So as you can imagine, I maintain that it’s a more than reasonable tradeoff, and I’ve covered the reasons for that. If you have nothing (correct) to add beyond hammering on this point, save it.


You got me, it doesn't have arguments. Luckily, my argument did not critically rely on this bit, and it's still valid. Instead of occasional disconnected thoughts and vulgar attempts to insult, try to construct a complete, coherent argument for why you think your view is valid.

A suggestion on how you could approach it: try to make a table with 2-3 columns for the solutions you and I are comparing, and add a row for each aspect or characteristic you want to compare them on; for example, usability, ease of implementation, room for error, you name it. In each cell, put either + or - if a solution clearly manages that aspect well or badly, or a detailed comment. Try to express all of the things you're feeling and that are coming to your mind. My comments are written with a table like that in mind; they translate easily to one. Once you have made your table and established that we disagree on what some cell should contain or which rows/columns should be present, feel free to get back to me for an actual discussion.


You’ve misrepresented or ignored all of my arguments, which are fairly complete as written. You can reformat them into a table for your personal use if it helps; I haven’t seen evidence that continuing into “an actual discussion” with you on this would have any value. (“It's 2025, dude. You can't be seriously telling me how difficult it is to parse arguments.” was a bad start, and while I’m on it: wrong and right, respectively.)


Haha, you got me again!

I honestly tried putting yours into a table and couldn't in a way that makes it look defensible. About 2025: I generally find a bit of cheeky tone appropriate for dramatic effect; apologies if I offended you.


(nicer reply to this)

Yes, I’m aware of the readability benefit of named arguments, and made the original comment with that awareness too.

> Make a named pipe then. Shells have built-in primitives for that. I.e. <() and >() subshells in bash,

That’s /proc/self/fd again. But okay, you can make a named pipe to trade the procfs mount and corresponding open-for-read permission requirement for a named pipe open-for-write permission requirement without receiving the other benefits I listed of just passing a FD directly.

> I can't understand why you keep inflating the difficulty of simple commandline parsing

Not only have I not “kept inflating” this, I barely brought up the related concept of it being unnecessary complexity from an implementation side (which it is).

> which the tool needs to do anyway

It doesn’t. The tool has no command-line arguments.

> From an interface perspective, how is `5>secretkey` simpler than `--sk secretkey`? The latter is descriptive, searchable, and allows bash completion. I'll type `ed25519-keypair`, hit tab, and recall what the argument is called.

Not introducing More Than One Way To Do It after all (“Or have an option to read either a file descriptor or a file”) here is a good start, but it’s hard to beat passing a file descriptor for simplicity. If the program operates on a stream, the simplest interface passes the program a stream. (This program actually operates on something even simpler than a stream – a byte string – but Unix-likes, and shells especially, are terrible at passing those. And an FD isn’t just a stream, but the point is it’s closer.) A file path is another degree or more removed from that, and it’s up to the program if/how it’ll open that file path, or even how it’ll derive a file path from the string (does `-` mean stdin to this tool? does it write multiple files with different suffixes? what permissions does it set if the file is a new file – will it overwrite an existing file? is this parameter an input or an output?).

Your attached arguments seem to be about convenience during interactive use, rather than the kind of simplicity I was referring to. (Bonus minor point: tab completion is not necessarily any different.)

> Moreover, the simplest shell scripts that call this tool are unreadable (and thus unauditable) without the manual.

That might be a stretch. But more importantly, who’s trying to audit use of these tools without the manual? You can be more sure of the program’s interpretation of `--sk secretkey` (well, maybe rather `--secret-key=./secretkey`) than `9>` if you know it runs successfully, but for anything beyond that, you do need to know how the program is intended to work.

Finally, something I probably should have mentioned earlier: it’s very easy to wrap the existing implementation in a shell function to give it a named-parameter filepath-based interface if you want, but the reverse is impossible.
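
For illustration, a minimal sketch of such a wrapper, assuming the tool name ed25519-keypair and the descriptor numbers 5/9 from the example discussed above (the option names --sk/--pk are made up):

```shell
# Hypothetical wrapper: named options on the outside, raw descriptors inside.
# 'ed25519-keypair', FD 5 (secret key), and FD 9 (public key) are assumptions
# taken from the example in this thread.
keypair() {
  local sk pk
  while [ $# -gt 0 ]; do
    case $1 in
      --sk) sk=$2; shift 2 ;;
      --pk) pk=$2; shift 2 ;;
      *) echo "keypair: unknown option: $1" >&2; return 2 ;;
    esac
  done
  ed25519-keypair 5>"$sk" 9>"$pk"
}

# Usage: keypair --sk secretkey --pk publickey
```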


I see, you are more focused on providing the core functionality in the simplest way possible from purely technical perspective, and less so on what kind of "language" or interface it provides the end user — assuming someone who wants an interface can make a wrapper. I can see that your points make sense from this perspective, the solution with FDs is indeed simpler from this viewpoint.

I, on the other hand, criticized it as a complete interface made with some workflow in mind that would need no wrappers, would help the user discover itself and avoid footguns. Your interpretation sounds like what the authors may have had in mind when they made it.

> who’s trying to audit use of these tools without the manual?

I'd try to work on different levels when understanding some system. Before getting into details, I'd try to understand the high-level components/steps and their dataflows, and then gradually keep refining the level of detail. If a tool has 2-3 descriptively named arguments and you have a high-level idea of what the tool is for, you can usually track the dataflows of its call quite well without manual. Say, understanding a command like

  make -B -C ./somewhere -k
may require the manual if you haven't worked with make in some time and don't remember the options. But

  make --always-make --directory=./somewhere --keep-going
gives you a pretty good idea. On a second read, when you're being pedantic about details, you may want to open the manual and check exactly what those things mean and guarantee, but it's not useless without the manual either.


It being an option can be nice if you don't want your keys touching disk and need to pass them over to other apps.

It being the default is insanity.


It's not just encouraging it, it's almost making it a necessity. Putting aside one's respect for the law may be a matter of responsibility when your competitors are gaining an advantage by not playing by the rules.


The reasons for doing something and the public justification, aka the casus belli, are different things. A casus belli makes the action cheaper to execute, but the reasons are what actually drive it.

The clowns, and the reasons that drive them, are the same for the Middle East and Venezuela. Does it make it any better that they happened to have a casus belli that you or I may sympathize with, given that the reasons are not in line with our values? Even a broken clock is right twice a day.


The difference between a casus belli and a state of war is: a casus belli is a one-time event, whereas a state of war means ongoing action that is bellicose in nature.

So i chose my words wrong.

I'd argue that a state of war already existed, well before the events in the gulf. It just didn't involve formal military movements.


https://youtu.be/V4TXg69kfo8?t=977

Reminds me of this speech by Leon Wieseltier.


> VHDL and Verilog are used because they are excellent languages to describe hardware.

Maybe they were in the '80s. In 2025, language design has moved ahead quite a lot; you can't be saying that seriously.

Have a look at how clash-lang does it. It uses the functional paradigm, which is much more suitable for circuits than the pseudo-procedural style of Verilog. You can also parameterize modules by modules, not just by bit width. Take a functional programmer, give them Clash, and they'll have no problem doing things in parallel.

Back when I was a systems programmer, I tried learning SystemVerilog. I had zero conceptual difficulty, but I just couldn't justify to myself spending my time on something so outdated and badly designed. Hardware designers at my company at the time were, on the other hand, OK with Verilog, because they hadn't seen any programming languages other than C and Python, and had no expectations.


Btw, you can make quicksort deterministically O(n log n) if you pick the pivot with a linear-time median-selection algorithm (median of medians). It's impressive how randomness lets you pick a balanced pivot, but even more impressive that you can do the same without randomness.
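
A sketch of that construction: the classic median-of-medians (BFPRT) selection finds the true median in worst-case O(n), so every quicksort partition is perfectly balanced and the total is O(n log n).

```python
def select(a, k):
    """Return the k-th smallest element of a (0-indexed), worst-case O(n)."""
    if len(a) <= 5:
        return sorted(a)[k]
    # Exact median of each group of 5, then the true median of those
    # medians, found by recursing into select itself.
    medians = [sorted(a[i:i + 5])[(len(a[i:i + 5]) - 1) // 2]
               for i in range(0, len(a), 5)]
    pivot = select(medians, (len(medians) - 1) // 2)
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    if k < len(less):
        return select(less, k)
    if k < len(less) + len(equal):
        return pivot
    return select(greater, k - len(less) - len(equal))

def quicksort(a):
    """Deterministic O(n log n) quicksort: pivot is the exact median."""
    if len(a) <= 1:
        return a
    pivot = select(a, (len(a) - 1) // 2)
    return (quicksort([x for x in a if x < pivot])
            + [x for x in a if x == pivot]
            + quicksort([x for x in a if x > pivot]))

print(quicksort([5, 3, 8, 1, 9, 2, 7]))  # [1, 2, 3, 5, 7, 8, 9]
```

In practice the constant factors make this slower than a random pivot, which is exactly the randomness-for-running-time trade mentioned below.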


https://www.wisdom.weizmann.ac.il/~oded/PDF/rnd.pdf

Then you reach derandomization and it hits you once again: it shows you that maybe you can "cheat" in the same way without randomness. Not for free (you usually trade randomness for a bit more running time), but your algorithms stay deterministic. Some believe all probabilistic algorithms can be derandomized (BPP = P), which would be a huge miracle if true.


What makes the simulation we live in special compared to the simulation of a burning candle that you or I might be running?

That simulated candle is perfectly melting wax in its own simulation. Duh, it won't melt any in ours, because our arbitrary notions of "real" wax are disconnected between the two simulations.


They do have a valid subtle point though.

If we don't think the candle in a simulated universe is a "real candle", why do we consider the intelligence in a simulated universe possibly "real intelligence"?

Being a functionalist ( https://en.wikipedia.org/wiki/Functionalism_(philosophy_of_m... ) myself, I don't know the answer off the top of my head.


>If we don't think the candle in a simulated universe is a "real candle", why do we consider the intelligence in a simulated universe possibly "real intelligence"?

I can smell a "real" candle, a "real" candle can burn my hand. The term real here is just picking out a conceptual schema where its objects can feature as relata of the same laws, like a causal compatibility class defined by a shared causal scope. But this isn't unique to the question of real vs simulated. There are causal scopes all over the place. Subatomic particles are a scope. I, as a particular collection of atoms, am not causally compatible with individual electrons and neutrons. Different conceptual levels have their own causal scopes and their own laws (derivative of more fundamental laws) that determine how these aggregates behave. Real (as distinct from simulated) just identifies causal scopes that are derivative of our privileged scope.

Consciousness is not like the candle because everyone's consciousness is its own unique causal scope. There are psychological laws that determine how we process and respond to information. But each of our minds are causally isolated from one another. We can only know of each other's consciousness by judging behavior. There's nothing privileged about a biological substrate when it comes to determining "real" consciousness.


Right, but doesn't your argument imply that the only "real" consciousness is mine?

I'm not against this conclusion ( https://en.wikipedia.org/wiki/Philosophical_zombie ) but it doesn't seem to be compatible with what most people believe in general.


That's a fair reading but not what I was going for. I'm trying to argue for the irrelevance of causal scope when it comes to determining realness for consciousness. We are right to privilege non-virtual existence when it comes to things whose essential nature is to interact with our physical selves. But since no other consciousness directly physically interacts with ours, it being "real" (as in physically grounded in a compatible causal scope) is not an essential part of its existence.

Determining what is real by judging causal scope is generally successful but it misleads in the case of consciousness.


I don't think causal scope is what makes a virtual candle virtual.

If I make a button that lights the candle, and another button that puts it out, and I press those buttons, then the virtual candle is causally connected to our physical world.

But obviously the candle is still considered virtual.

Maybe a candle is not as illustrative, but let's say we're talking about a very realistic and immersive MMORPG. We directly do stuff in the game, and with the right VR hardware it might even feel real, but we call it a virtual reality anyway. Why? And if there's an AI NPC, we say that the NPC's body is virtual -- but when we talk about the AI's intelligence (which at this point is the only AI we know about -- simulated intelligence in computers) why do we not automatically think of this intelligence as virtual in the same way as a virtual candle or a virtual NPC's body?


Yes, causal scope isn't what makes it virtual. It's what makes us say it's not real. The real/virtual dichotomy is what I'm attacking. We treat virtual as the opposite of real, therefore a virtual consciousness is not real consciousness. But this inference is specious. We mistake the causal scope issue for the issue of realness. We say the virtual candle isn't real because it can't burn our hand. What I'm saying is that, actually the virtual candle can't burn our hand because of the disjoint causal scope. But the causal scope doesn't determine what is real, it just determines the space and limitations of potential causal interactions.

Real is about an object having all of the essential properties for that concept. If we take it as essential that candles can burn our hand, then the virtual candle isn't real. But it is not essential to consciousness that it is not virtual.


> If we don't think the candle in a simulated universe is a "real candle", why do we consider the intelligence in a simulated universe possibly "real intelligence"?

A candle in Canada can't melt wax in Mexico, and a real candle can't melt simulated wax. If you want to differentiate two things along one axis, you can't just point out differences that may or may not have any effect on that axis. You have to establish a causal link before the differences have any meaning. To my knowledge, intelligence/consciousness/experience doesn't have a causal link with anything.

We know our brains cause consciousness the way we knew in 1500 that being on a boat for too long causes scurvy. Maybe the boat and the ocean matter, or maybe they don't.


I think the core trouble is that it's rather difficult to simulate anything at all without requiring a human in the loop before it "works". The simulation isn't anything (well, it's something, but it's definitely not what it's simulating) until we impose that meaning on it. (We could, of course, levy a similar accusation at reality, but folks tend to avoid that because it gets uselessly solipsistic in a hurry)

A simulation of a tree growing (say) is a lot more like the idea of love than it is... a real tree growing. Making the simulation more accurate changes that not a bit.


https://news.ycombinator.com/item?id=37714144

This comment here, I think, has the answer. The initial returns from using the more functional and mathematical view of programs are smaller than those of the traditional approach. But the former scales better if you invest more time in it, and eventually can give you much more.

