The usual workarounds are a stateful API (e.g. Cairo, OpenGL, or Windows GDI), passing a structure explicitly (oodles of examples in Win32, e.g. RegisterClass or GetOpenFileName), or twiddling an object that's actually just a structure dressed up in accessor methods (IOpenFileDialog).
There could be reasons to use one of those still (e.g. extensibility while keeping a compatible ABI, as in setsockopt, pthread_attr_*, or arguably posix_spawnattr_*). But sometimes you really do need a finite, well-known but just plain large number of parameters that mostly have reasonable defaults. Old-style 2D APIs to draw and/or stroke a shape (or even just a rectangle) are the classic example. Plotting libraries (in all languages) are also prone to this. It does seem like these situations are mostly endemic to specific application areas—graphics of all kinds first of all—but that doesn’t make them not exist.
If you don’t want to use function-like macros for anything ever even if this particular one works, that’s a valid position. But it does work, it does solve a real problem, and it is less awkward at the use site than the alternatives.
With large numbers of parameters, it's almost always more readable to use a config struct. Especially since you often want to collect configuration from multiple sources, and incrementally initializing a struct that way is helpful.
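A sketch of that layering, with hypothetical cfg_* loaders, each of which overwrites only the fields its source actually provides:

    struct server_cfg {
        int port;
        int max_conns;
        const char *log_path;
    };

    /* Hypothetical loaders; each touches only the fields it finds. */
    void cfg_load_file(struct server_cfg *c, const char *path);
    void cfg_load_env(struct server_cfg *c);
    void cfg_load_argv(struct server_cfg *c, int argc, char **argv);
    int  server_run(const struct server_cfg *c);

    int main(int argc, char **argv) {
        /* Start from compile-time defaults, then layer sources on top. */
        struct server_cfg cfg = { .port = 8080, .max_conns = 64,
                                  .log_path = "/dev/null" };
        cfg_load_file(&cfg, "/etc/app.conf");
        cfg_load_env(&cfg);
        cfg_load_argv(&cfg, argc, argv);
        return server_run(&cfg);
    }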
There are a lot of syntactic sugar improvements the committee could make but simply refuses to. Named parameters and better function pointer syntax are compile-time fixes with zero runtime cost, yet it's 2024 and we've hardly budged from ANSI C.
Exactly. I actually think default parameters are hazardous without named-parameter support. When they added one, IMO they should have added the other as well, so that you can specify exactly which non-default parameters you're passing.
I think this is more an appeasement of the C++ committee, because they don't like the order of evaluation being ambiguous when constructors with side effects come into play. Witness how they completely gimped the primary utility of designated initializers by requiring the fields to appear in declaration order.
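For example, both of these are valid C, but C++20 accepts only the second, since its designators must follow declaration order:

    struct point { int x, y, z; };

    /* Valid C: designators may appear in any order. */
    struct point a = { .z = 3, .x = 1 };

    /* The only ordering C++20 accepts. */
    struct point b = { .x = 1, .z = 3 };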
The problem with doing it in rust is that most calls aren't tail calls, even if they look like tail calls. You need to invoke the destructors for any code path that can drop.
Isn't that the purpose of `become`? I thought it was to say "this IS a tail call, error out if it is not". After that validation is done, then the compiler can drop as needed.
The compiler can't drop as needed, because the drop is what prevents things from being tail calls: the drop has to run after the call returns, so the call isn't actually in tail position. A single drop in a function prevents any of its calls from being tail calls, and therefore they can't be eliminated.
In idiomatic rust, this means very few functions can use become.
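The same issue spelled out in C, where the cleanup is explicit rather than an implicit drop at end of scope (g() here is just some hypothetical callee):

    #include <stdlib.h>

    int g(void);

    int f(void) {
        char *buf = malloc(64);
        int r = g();   /* looks like a tail call... */
        free(buf);     /* ...but the cleanup must run after g() returns,
                          so g() is not in tail position */
        return r;
    }

In Rust, any live local that implements Drop inserts the equivalent of that free() for you, with the same effect on tail position.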
What makes you think that the decision making will be hard to automate? If development keeps up this pace, doesn't it seem plausible that in a couple of decades AI will be able to extract requirements from observed behavior, and decide how to fulfill them better than a human can?
Because good decision making requires lots of external context and interpersonal assumptions. These things are not even documented. It's like every outsourcing project for code that I've seen: yes, what was delivered matched the written requirements, but it's useless and needs lots of changes. There's still going to be space for people who understand how things work both in code and in practice. (Cue Office Space "I deal with customers so engineers don't have to. I've got people skills!")
There are also, right now, cases where I can either write a paragraph describing the change I need, or just change those two lines myself. I don't expect those to go away entirely; having an agent available will be closer to an extra skill than a full replacement.
It seems like observing usage and setting up a feedback loop, then incrementally A/B testing, is certainly not something that can be done today, but it doesn't seem beyond the realm of possibility a couple of decades from now.
Edit: In fact, now that I think about it, I'd probably be more OK dealing with a requirements document and testing plan generated by ChatGPT today from a summary than I would be dealing with code: hallucinations tend to have less impact at that stage. Requirements tend to be wrong anyway until they're implemented; code is where the rubber hits the road. The missing piece is having LLMs get better at integrating and reasoning about real-world data, so that they can iterate on user testing.
Heck, at that point, models can decide what to build, and not just how to build it; there's a treasure trove of data that they can access much faster than humans to find gaps in the market and generate solutions to fill those gaps.
Assuming we don't hit a wall with model capabilities, we can focus purely on consuming the outputs of AI, as it optimizes itself to best keep us happy.
Unfortunately, servers are usually configured without X forwarding enabled. And the functionality I am describing already exists for terminal-based programs; it's been reimplemented multiple times (see e.g. [0][1]), but it's implemented with horrible hackery: manually driving the terminal, processing raw input, and counting how many lines of text the terminal screen is probably displaying right now.
I just want to, e.g., write a simple Python program that has
    for line in streaming_response.iter_lines():
        print(line)
in one thread, and
    while True:
        cmd = input('> ').strip()
        if cmd == 'q':
            break
        if cmd == 'stop':
            requests.post(...)
        ...
in another, and be able to input my commands without the echo of my input being torn up by the output. Erlang's shell can do that. Readline can be used to do that, but Python's bindings don't export the needed functions. Swapping out sys.stdout/sys.stdin with my own custom interceptors to do this manually... barely works; it's slow, ugly as hell, and complicated.
There's a far worse problem: you effectively need to use it from preprocessor macros, which prevents it from being used in sane interfaces.
The "interface" would directly expose the specialized and uniquely named functions. _Generic is just an inline helper to automatically select the right specialized function by a type. If C ABI compatibility is required, that's about the only way to do it (for better or worse).
(FWIW I haven't found a compelling reason yet to use _Generic in my own C code)