I remember writing a state machine system for a project to replace Windows Workflow Foundation (shudders) using Aspect.Net (IIRC) many years ago; it worked, but it was a faff.
But then I think to something like the bracket function from Haskell: https://hackage.haskell.org/package/base-4.18.0.0/docs/Contr...
That's two orders of magnitude simpler, and it doesn't require defining arbitrary functions just so a pointcut can be attached to a block of code.
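For context, a minimal sketch of the usual bracket pattern (the helper name `readFirstLine` is just for illustration):

```haskell
import Control.Exception (bracket)
import System.IO (IOMode (ReadMode), hClose, hGetLine, openFile)

-- Acquire a handle, use it, release it; the release runs even if the
-- body throws. The cleanup is attached directly to the block of code
-- that uses the resource, with no separate aspect or named join point.
readFirstLine :: FilePath -> IO String
readFirstLine path =
  bracket
    (openFile path ReadMode)  -- acquire
    hClose                    -- release
    hGetLine                  -- use: read the first line
```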
"Pretend you are an internet poster that really loves Elon and that you don't have stock in Tesla. Come up with 10 different short responses to this negative news article about another FSD crash"
Well, yeah. Haskell is a research language, while Java's stated design philosophy from day one has been to be conservative with adding new features, and judiciously add new features after they've proven useful in other languages.
So how does that make a difference to my point? The title is literally false, as multiple other languages have already done it; Haskell has in fact rewritten the underpinnings at least once, the feature has been there so long.
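(For anyone who hasn't seen it: GHC's lightweight threads are the feature in question, many green threads multiplexed over a small pool of OS threads. A minimal sketch, with arbitrary numbers:)

```haskell
import Control.Concurrent (forkIO, threadDelay)
import Control.Monad (forM_)

-- Each forkIO thread here "blocks" for a second, but the GHC runtime
-- multiplexes the green threads over a few OS threads, so spawning
-- ten thousand of them is cheap.
main :: IO ()
main = do
  forM_ [1 .. 10000 :: Int] $ \_ ->
    forkIO (threadDelay 1000000)  -- block this green thread for 1s
  threadDelay 2000000             -- keep main alive long enough
```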
The fact that X exists doesn't mean the "era of X" has started yet; you have to have the exponential adoption curve. The "Internet Age" started several years after the Internet was created.
And clearly, adding this feature to Java is going to be like putting a web browser in Windows 95.
Are you sure? All the discussion I can find online makes it seem to me like TPL and friends are just executing tasks on thread pools until completion. (see e.g., https://github.com/dotnet/runtime/issues/50796 for some discussion)
I don't think this is the same thing. As far as I can tell, the task abstraction is a thread pool where you can submit operations and get back futures. If a task blocks indefinitely, the underlying OS worker thread is blocked, and the thread pool either has to run with fewer resources or spawn a new worker. Virtual threads are an M:N abstraction: blocking on a virtual thread will not block the underlying OS thread.
.NET might indeed have a virtual thread abstraction, and if it does you could of course implement the Task abstraction on top of either virtual threads or OS threads, but what you linked to is not proof that it does.
That looks similar to Java FutureTasks + Executors, which is a very different concept from virtual threads.
Virtual threads mean that a blocking thread can yield to any other non-blocking thread seamlessly and with very little overhead. .NET Tasks cannot do this as far as I can tell.
Oh interesting, that's very cool, I didn't realize Java was doing that. That's a different axis than M:N though (cooperative versus preemptive) and you could definitely write a preemptive async runtime for Rust (rtic comes to mind). But the async-std and tokio runtimes are certainly cooperative.
(As a note, cooperative scheduling also requires a runtime - Rust might not "have a runtime" by default but you need to opt into one to use async.)
I think most people would find these clearer if they used the functions view/set/etc, rather than the various operators which are just alternatives to those.
> The basic operators here are ^., .~, and %~. If you’re not a fan of funny-looking operators, Control.Lens provides more helpfully-named functions called view, set, and over (a.k.a. mapping); these three operators are just aliases for the named functions, respectively. For these exercises, we’ll be using the operators.
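For concreteness, each operator form has a named-function equivalent; a sketch, assuming a makeLenses-generated `name` lens on a made-up `User` record:

```haskell
{-# LANGUAGE TemplateHaskell #-}
import Control.Lens

-- Hypothetical record; makeLenses generates a `name` lens from `_name`.
data User = User { _name :: String } deriving Show
makeLenses ''User

demo :: User -> (String, User, User)
demo user1 =
  ( user1 ^. name              -- same as: view name user1
  , user1 & name .~ "Alice"    -- same as: set  name "Alice" user1
  , user1 & name %~ reverse    -- same as: over name reverse user1
  )
```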
But the person who "is not a fan of funny-looking operators" still has to read them; and that's the part that makes funny-looking operators undesirable to them in the first place!
To be fair, if after reading that paragraph they find it so insufferable, they can skip the article. The author makes a choice and acknowledges other opinions but, at the end of the day, has their own opinion.
> I think most people would find these clearer if they used the functions view/set/etc, rather than the various operators which are just alternatives to those.
Everyone has their own tastes. Someone unfamiliar with the notation of ordinary algebra might argue for "you will have to find two numbers that the difference between the two is 10 (that is, so much as is our number) & that we make the product of these two quantities, the one multiplied by the other, exactly 1, that is, the cube of the third part of the variable" (https://www.maa.org/press/periodicals/convergence/how-tartag...) as easier to understand than "find u and v such that u - v = 10 and u v = 1", but I think most modern readers would agree that people uncomfortable with the algebra are better served by learning how to read the latter than by sticking with the former. (And, I think, also that it doesn't help either to keep the variables but replace the symbolic operations by words: `(and (eq (subtract u v) 10) (eq (mult u v) 1))`, in pseudo-Lisp.)
The thing is, basic algebra notation has two major advantages over ad-hoc operators in some Haskell library: (1) it is widely taught and understood and extremely widely applicable, and (2) each operator has a standard name that is widely explained.
In contrast, most Haskell notation I've seen is either an ad-hoc invention for some library, or an ASCII version of notation from a niche domain like category theory. Even Haskell's >>= operator for flatMap/bind seems to be an invention; as far as I can tell, the equivalent concept in CT is Kleisli composition, denoted by a sharp sign and the regular composition operator (as far as Wikipedia shows - I'm not formally trained in CT).
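(For reference, Haskell does also expose Kleisli composition directly, as >=> in Control.Monad, and the two are interdefinable; a quick sketch:)

```haskell
import Control.Monad ((>=>))

-- Kleisli composition written with bind, and bind recovered from
-- Kleisli composition by fixing the first argument.
kleisli :: Monad m => (a -> m b) -> (b -> m c) -> (a -> m c)
kleisli f g = \x -> f x >>= g

bind :: Monad m => m a -> (a -> m b) -> m b
bind m f = (const m >=> f) ()
```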
Additionally, people rarely if ever give a proper name to this notation in Haskell, making it completely impenetrable to even represent the formulas in your mind. How am I supposed to read `user1 ^. name`? When I see `∇⨯f` I know how to read it (del cross f, or nabla cross f, or curl f) because that was an explicit part of how I was taught the operation (and note that it is not an arbitrary digraph, it can really be computed as the cross product of the pseudo-vector nabla and f), but Haskell tutorials and documentation completely skip this step, in my experience.
> The thing is, basic algebra notation has two major advantages over ad-hoc operators in some Haskell library: (1) it is widely taught and understood and extremely widely applicable, and (2) each operator has a standard name that is widely explained.
> In contrast, most Haskell notation I've seen is either an ad-hoc invention for some library, or it is an ASCII version of notation in a niche domain like category theory.
But that's my point! When it was introduced, none of the symbology of school algebra satisfied condition (1) or (2); it was ad hoc (perhaps supported by some reasoning, as Recorde's for the equal sign being two parallel lines, "than which no two things are more equal", or perhaps not), and it was found fully as abstruse and obfuscatory as you find symbolic operators in Haskell. Even Arabic numerals were thought a device for lies and deception, compared to good honest Roman numerals, when first introduced. If we hadn't adopted those operators anyway, then we'd still be writing out all our equations in words as Tartaglia did, and our mathematics would be the poorer for it; had we stopped our notation earlier, we would still, as in pre-Arabic-numeral times, send our children to special advanced schools to learn multiplication. New notation when first introduced is confusing and strange, but, once made commonplace, it enables new thought, so that what was a niche domain becomes commonplace.
(For that matter, I'd disagree about (2). For example, there is a symbol called the vinculum that is used for many purposes in mathematics (https://en.wikipedia.org/wiki/Vinculum_(symbol)). Probably almost everyone has used that symbol for one of its meanings, but I'd argue that very likely almost no-one knows its name.)
I'm not claiming + and = have some underlying meaning, of course they were taught. My main point is that symbols are only truly useful when they are ubiquitous in their field. Before that, they are obscure and should be avoided.
If the Haskell community or the lens community wants to do the hard work of teaching everyone their new symbols, go right ahead. Until that work is done, though, I would advise everyone to avoid using these symbols.
In regards to the vinculum, it's true that I've never heard that name, but I learned different names for the different uses - not in English though, as I took mathematics in Romanian, and some of the terminology and even notation is not the same (for example, repeating decimals are denoted using parentheses, not a vinculum - as in 1/30 = 0.0(3)).
Not that you would necessarily use the same words in Haskell, but I'm curious how you would read `user1^.name` in Pascal, or `user1->name` in C or `(*user1).name` in C?
(Edit: I'd also be curious about `x += y` and `x << y` in C.)
user1^ is what the user1 pointer points to, if I remember my Pascal correctly. It's a structure and the .name says to take the name field of that structure. So ^. isn't a digraph, it's two separate operators.
Same with your C example. It's doing the exact same thing in C, except the operators (* and .) are separated from each other.
-> is a digraph. It means the same as `*.` I don't usually pronounce it, I just think of it as itself. If I have to say it to myself mentally, I say "sub". I'm not sure I've ever tried to say it aloud to a coworker; if I did, I might have said "arrow" or something.
You do have to learn these things, just like you have to learn all the other operators. But simiones still has a point - at least the math-based operators are much more widely known and understood than the category-based ones.
But what are their names? That was the standard applied to Haskell: that the operators needed well-understood names. C++ does not have that. Worse still, C++ operators are completely unintelligible without context.
For example, '>>=' (commonly called bind in haskell) is well-specified. Anything I see it used with is going to be a Monad, and thus follow certain laws. Examining the imports, I can immediately tell what any operator is.
In C++? Forget about it. The 'left-shift' operator, which is supposed to shift bits to the left, can somehow also print things to standard output. In what world can the terminal be bit-shifted left? In fact, we understand this only because no one reads 'std::cout << "Hello world"' as 'shift std::cout left by "Hello world"', because such a thing is nonsense, whereas '1 << 2' is '1 shifted two bits to the left'.
EDIT: And then, when you add in external libraries, it gets worse. `<<` can be used for creating lexers and parsers in boost if I recall correctly. Completely lawless, and, when you survey the ecosystem, also dangerous. So many bugs in C++ and such due to this.
I agree. I do feel there is a double standard here. C gets away with using `->` and `^=`; `*foo.bar` mixes prefix, infix, and postfix with hard-to-remember precedence rules; C uses `{` and `}` instead of `BEGIN` and `END`, and no one bats an eye. But when lens libraries use operators, suddenly some people lose their minds.
I prefer `rec^.field` with operators in lens for the same reason I prefer `ptr^.field` in Pascal over hypothetically writing `ptr DEREFERENCE ACCESS field` or `ACCESS(DEREFERENCE(ptr), field)`. It lets me hide what is essentially just plumbing-like grammar behind operators, in order to let me focus on the parts of the program related to the business logic at hand, namely `rec`, `ptr` and `field`. Otherwise the plumbing tends to drown out the more important parts of what is going on.
First of all, some people's eyes already glaze over when looking at C. It's only accepted because it has become near ubiquitous in computing, and notation is much more palatable when it is consistent for a whole domain. This is a very important point that many people miss: the reason why Java, C#, JS, even C++ to some extent get away with lots of non-maths operators is exactly because they have chosen to copy each other. That's a very powerful thing.
Second of all, even within C, these operators are ubiquitous. You will use . and -> and * to dereference and & to take the address, and = for assignment, and ++ or -- for increment/decrement, and ==, &&, ||, ! for logical operators, and the <operator>= notation for applying a change to a variable, and [] for subscripting, in almost every program you write in C. Shortening a very common operation to a symbol is much more easily acceptable than shortening a more rarely used operation, or one that's specific to a library.
Basically, the status quo in programming as I see it is this: it's OK to use symbols instead of words for (a) syntax in your language (C curly braces, dots), or (b) ubiquitous constructs that are used virtually all over any program (+=, *), or (c) if they are already well-known symbols in other popular languages, or in non-programming fields (such as the near-universal regex syntax). Even (a) and (b) benefit a whole lot from familiarity with other languages, as should be expected. They also still require proper names - which the C standard for example gives for every operator, as do C tutorials.
Thanks for all your replies. While I still think it is better to use operators in order to reduce clutter in the code (after all, for any notation to become ubiquitous it must first go through a period of not being ubiquitous), perhaps more can be done to introduce the operators in the documentation. It should be pretty easy to give all the operators names, since almost all of them already have a named function implementing them.
Note that the C and C++ standards have names for all of these operators. "->" is called the "member access" operator (though so is ".", to be fair). It also has the advantage of being a pretty clear typographic symbol - it can be called "arrow".
As for <iostream> overloading the left shift operator to use for writing to an ostream, that is widely regarded as a terrible idea, and such overloading is generally frowned upon in the community. In general, <iostream> was created before C++ was even standardized, and well before good practices became established, and doesn't reflect at all more common usage of the language (see also the fact that they don't throw exceptions by default).
The only other somewhat widely used example I know where operators are overloaded with entirely different meanings than their standard usage is indeed in the boost::spirit parser generator library, where they are trying to approximate the syntax of typical parser definitions using C++ operators, with at best mixed success. And they don't just overload <<, they overload all the operators with completely different meanings - such as overloading the unary plus (+x) operator to mean "token appears 1 or more times" and even the dereference operator (*x) to mean "token appears 0 or more times" and so on. Still, I don't think too many people are crazy enough to mix boost::spirit with regular C++ in the same file.
Note also that operator overloading is not commonly supported in other C-family languages, typically because they are trying to avoid C++'s usage of it.
What are their names? "Member access" is the name of "->". That's a pretty well-understood and easily-understood name.
> Worse still, C++ operators are completely unintellible without context.
Um, compared to what? The C++ operators require less context to understand than the Haskell lens operators by a large amount, so this seems like a really odd criticism.
Your last two paragraphs... yeah. Operator overloading has been used for some, uh, unusual uses, even by the standards committee.
> Um, compared to what? The C++ operators require less context to understand than the Haskell lens operators by a large amount, so this seems like a really odd criticism.
This is my exact point. The C++ operators require much more context.
Suppose you see:
`v << i`
in C++. What is the type of v? What is i? Is it an int? Or an output stream? Or something else entirely! I have no idea. By the C++ standard it could be anything. In fact, it is heavily dependent on what's in scope, and indeed, several different things could overload it depending on what's in scope. Very confusing.
In haskell,
'x ^. i'
tells you everything. 'x' is any type, but 'i' is mandated to relate to 'x' in that 'i' is an optic targeting 'x' and returning some 'subfield' of x. In particular, 'i' is 'Getter a b', where 'a' is the type of 'x'. That is absolute. There is no other interpretation. Haskell type class overlap rules prevent any module imports from confusing the matter.
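Concretely, the type of (^.) in Control.Lens is what forces this: the right operand has to be an optic into the left operand's type. A small sketch using the standard `_1` lens for pairs:

```haskell
import Control.Lens

-- (^.) :: s -> Getting a s a -> a   (simplified from Control.Lens)
-- The right operand must be an optic into the left operand's type, so
-- the expression below can only mean "project the first component".
firstComponent :: Int
firstComponent = (1 :: Int, "hello") ^. _1

-- bad = (1 :: Int, "hello") ^. (2 :: Int)  -- type error: Int is not an optic

main :: IO ()
main = print firstComponent
```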
I agree with you about the context dependence being problematic in C++ (and that is also why overloading operators to give them entirely different meanings is deeply frowned upon, as I mentioned in other contexts).
However, I was curious - is it not possible in Haskell for two different libraries to introduce the same operator with different meanings, and still be used in the same program (though perhaps not in the same file)? That would be an unfortunate limitation all on its own, and one that mathematical notation certainly doesn't share.
In general, re-using symbols for similar purposes is also an extremely commonly used tool in mathematics, for the exact reason that constantly introducing completely new symbols is not worth the mental hassle. For example, when defining the "dot product" and "cross product" operations for vectors, entirely new symbols could have been introduced, but it was preferred to reuse common multiplication symbols (hence the names) as there is some relation to the real number multiplication operation.
> is it not possible in Haskell for two different libraries to introduce the same operator with different meanings, and still be used in the same program
It is possible, of course, just like it's possible for two different libraries to have a function of the same name. But it's always possible to determine statically (by looking at the import list, etc.) which of the instances the usage refers to.
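A concrete illustration with two real modules that export functions of the same name (operators can be qualified the same way, e.g. Prelude.+):

```haskell
import qualified Data.Map as Map
import qualified Data.Set as Set

-- Data.Map and Data.Set both export a function named `member`;
-- qualified imports make each use site unambiguous, and the import
-- list alone tells the reader which one a given call refers to.
main :: IO ()
main = do
  print (Map.member 'a' (Map.fromList [('a', 1 :: Int)]))
  print (Set.member 'a' (Set.fromList "abc"))
```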
I'll agree that overloading << to do output is a WTF moment. But at least I can hire programmers off the street who know what output is. In contrast, that last paragraph of yours looks like you were carrying around a bowl full of random jargon, tripped, and spilled it all over your keyboard.
For your C++ example, I have to know the type of v and i (and that in C++, it's possible to overload operators). For your example, I have to know what an "optic" is, what a "Getter" is, what Haskell thinks a "subfield" is (and that means that 'x' can't be any type, but has to be the kind of type that contains a subfield), and I suspect several other background things that I don't even know what they are.
C++ makes you know a bit more about the types (though even in Haskell, you had to know that x had subfields). Haskell makes you know much more concept-level context.
> In contrast, that last paragraph of yours looks like you were carrying around a bowl full of random jargon, tripped, and spilled it all over your keyboard.
It's scary that this sort of criticism is considered appropriate on Hacker News, which is supposedly a forum for professional software developers. Very sad. Everything I mentioned should be taught in a 2nd- or 3rd-year computer science course (namely, everything I mentioned was about type systems, which is a standard topic most university programs cover).
I agree.
I find it especially odd that people think programming and computer science are complex topics that take years to learn and master, but then complain about the appearance of jargon with an appeal to "ease of use".
If the concepts are the hard part to learn, then operators and technical speech are the least of your worries, but a mandatory part on the path to acquiring the art.
The most important thing math has shown me is that notation and precision of speech play a huge part in making things clear. Yet people get heart attacks when this is applied to informatics.
> For your C++ example, I have to know the type of v and i (and that in C++, it's possible to overload operators). For your example, I have to know what an "optic" is, what a "Getter" is, what Haskell thinks a "subfield" is (and that means that 'x' can't be any type, but has to be the kind of type that contains a subfield), and I suspect several other background things that I don't even know what they are.
I still think this appeal to familiarity is dangerous. You take your knowledge about C++ as obvious and standard, and disparage knowledge about Haskell as a bowlful of jargon. But no-one comes into the world finding C++ intuitive, either; it has to be taught. And teaching C++ without referring to operator overloading, or with other attempts to obscure the way its practitioners actually talk about and practice it, can only be supported so far. Similarly here—I suspect more of the people who code with lenses in Haskell use the symbolic operators than the 'word' operators. Though I have no data to back that up, obviously it's how the writer of this tutorial thinks about them. So why shouldn't they use the format that makes sense to them, and that they hope to teach people to use?
> (and (eq (subtract u v) 10) (eq (mult u v) 1)) could be:
> (u `subtract` v `eq` 10) `and` (u `mult` v `eq` 1)
True. I used Lisp because I think it's reasonably common to use words for arithmetic operations in many Lisp dialects, but not in Haskell. But it's not the prefix notation to which I was objecting, but the idea of spelling out every operation when we've got a notation specifically designed to facilitate rapid thought and computation without that.
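(The backtick form in the quote is real Haskell syntax for using a named function infix; for example, with standard Prelude functions:)

```haskell
-- Any two-argument function can be used infix with backticks.
main :: IO ()
main = do
  print (10 `div` 3)  -- 3
  print (10 `mod` 3)  -- 1
```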
I would say the following for NixOS (which is a distro itself, not a way to make a distro):
- I have machine configurations such that if, say, an HDD dies, I can just replace it and be pretty much back where I was an hour later, starting from scratch.
- I can update a configuration (even doing the equivalent of a major OS update) without fear as it's possible to just use the old configuration if something is broken.
For Nix in general:
- Can define versioned development environments which aren't awkwardly sandboxed as they are in Docker.
- It's possible to make multi language/ecosystem builds.
- Builds only build what needs building.
- You can build on one machine and ship the build to another as if that machine had built it.