In Haskell, that's usually done using `do` syntax.
    do
      a <- somePartialResult
      b <- partialFunction1 a
      c <- partialFunction2 a b
      return c
where we assume signatures like
    somePartialResult :: Either Error A
    partialFunction1 :: A -> Either Error B
    partialFunction2 :: A -> B -> Either Error C
This overall computation has type `Either Error C`. The way it works is that the first failing computation (the first `Either` that's actually Left-y) short-circuits and becomes the final result value. Only if all of the partial computations succeed (are Right-y) will the final result be `Right c`.
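For a tiny illustration of that short-circuiting (a GHCi session, with the error on the `Left` side):

    ghci> (Left "boom" :: Either String Int) >>= \n -> Right (n + 1)
    Left "boom"
    ghci> (Right 2 :: Either String Int) >>= \n -> Right (n + 1)
    Right 3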
Haskell doesn't have an early-return statement tied to function scope the way imperative languages do. Instead, we construct something equivalent using `do` syntax. This can be a little weightier than `return`, but the upside is that you can construct other variants of things like early returns that can be more flexible.
Unfortunately, no. Or rather, I'm sure there's a way to make it happen, although it's not typical practice. Typically you'd resort to mapping the left sides of your Eithers so that the error types match.
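For instance (a minimal sketch; the types here are made up purely for illustration), `Data.Bifunctor.first` maps over the `Left` side so the error types line up:

    import Data.Bifunctor (first)
    import Text.Read (readMaybe)

    newtype DbError    = DbError String
    newtype ParseError = ParseError String
    data AppError      = FromDb DbError | FromParse ParseError

    fetchRow :: Either DbError String
    fetchRow = Right "42"

    parseRow :: String -> Either ParseError Int
    parseRow s = maybe (Left (ParseError s)) Right (readMaybe s)

    loadRecord :: Either AppError Int
    loadRecord = do
      row <- first FromDb fetchRow        -- DbError    -> AppError
      first FromParse (parseRow row)      -- ParseError -> AppError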
Rust offers a similar facility (though specialized to just a couple kinds of error handling) with its `?` syntax. This works essentially identically to the `do` syntax above, but also inserts a `From::from` call to convert whatever error type is produced into the error type of the function's return.
Note that in Rust (a) this technique only, today, works at function boundaries and (b) will always be explicitly annotated since all functions require an explicit type. This helps a bit over Haskell's more general approach as it provides some additional data to help type inference along.
That said, if you were interested, it's likely possible to emulate something very similar to Rust's technique in Haskell, too.
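A rough sketch of what that emulation might look like, with a hypothetical `Convert` class standing in for Rust's `From` (nothing standard here):

    {-# LANGUAGE MultiParamTypeClasses #-}
    import Data.Bifunctor (first)

    class Convert e e' where
      convert :: e -> e'

    -- Roughly what `?` does in Rust: run the computation, converting its
    -- error into whatever error type the surrounding function wants.
    try' :: Convert e e' => Either e a -> Either e' a
    try' = first convert

    newtype DbError = DbError String
    data AppError   = AppDb DbError | AppOther String

    instance Convert DbError AppError where
      convert = AppDb

    loadUser :: Either DbError String -> Either AppError String
    loadUser fetch = do
      name <- try' fetch
      pure ("user:" ++ name)

Unlike in Rust, nothing here is wired into the language, so the conversion has to be invoked explicitly at each step.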
But I don't think I've ever seen that. It just doesn't feel as stylish in Haskell. The From/Into traits define a behavior that's much more pervasive than most type classes in Haskell; it works well for Rust, but I think it's less compelling to the Haskell community.
Without judgement, this feels like a switch up. It seems to me that the prior author did not suggest they were necessary, but rather that they are ambient, available, and interesting. Indeed, for many people they are not useful.
At the same time, that might be a good indication of passion: a useless but foundational thing you learn despite having zero economic pressure to do so.
In the domains you have worked on, what are examples of such things?
It is not a switch up. The author said that an engineer would find these during their journey, and I am asking when. I don't have a strongly held opinion here, I literally want to know when. I am curious about it.
Personally, I think it's possible to not encounter them. I certainly avoided them for a while in my own career, finding them to be the tricks of a low-level optimization expert toward which I felt little draw.
But then I started investigating type systems and proof languages and discovered them through Boolean algebras. I didn't work in that space; it was just interesting. I later learned more practically about bits through parsing binary message fields and wondered what the deal with endianness was.
I also recall that yarn that goes around from time to time about Carmack's fast inverse square root algorithm. Totally useless, and yet I recall, the first time I read about it, how fun a puzzle it was to work out the details.
I’ve encountered them many times since then despite almost never actually using them. Most recently I was investigating WASM and ran across Linear Memory and once again had an opportunity to think about the specifics of memory layout and how bytes and bits are the language of that domain.
I've mostly done corporate Java stuff, so a lot of it didn't come up, as Spring was already doing it for me.
I first got to enjoy low-level programming when reading Fabien Sanglard's breakdowns of Carmack's code. Will look into WASM; sounds like it could be a fun read too.
I found them in 2015 when I was maintaining a legacy app for a university.
The developer that implemented them could have used a few bools but decided to cram it all into one byte using bitwise operators because they were trying to seem smart/clever.
This was a web app, not a microcontroller or some other tightly constrained environment.
One should not have to worry about solar flares! Heh.
Maybe. Whenever I write something I consider clever, I tend to regret it later.
But young people in particular tend to write code using things they don’t need because they want experience in those things. (Just look at people using kubernetes long before there is any need as an example). Where and when this is done can be good or bad, it depends.
Even in a web app, you might want to pack bools into bytes depending on what you are doing. For example, I’ve done stuff with deck.gl and moving massive amounts of data between webworkers and the size of data is material.
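A minimal sketch of the packing idea itself (Haskell purely for illustration):

    import Data.Bits (setBit, testBit)
    import Data.Word (Word8)

    -- Eight Bools squeezed into one byte instead of eight.
    packFlags :: [Bool] -> Word8
    packFlags flags = foldl set 0 (zip [0 .. 7] flags)
      where
        set acc (i, True)  = setBit acc i
        set acc (_, False) = acc

    unpackFlags :: Word8 -> [Bool]
    unpackFlags byte = [ testBit byte i | i <- [0 .. 7] ]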
It did take a beat to consider an example though, so I do appreciate your point.
Coming from a double major including EE though, all I have to say is that everyone's code everywhere is just a bunch of NOR gates. Now, if you want to increase your salary, looking up why I say "everything is NOR" won't be useful. But if you are curious, it is interesting to see how one Boolean operation can implement everything.
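A quick sketch of the "everything is NOR" claim (Haskell just as notation): NOT, OR, and AND, each built from NOR alone.

    nor :: Bool -> Bool -> Bool
    nor False False = True
    nor _     _     = False

    not' :: Bool -> Bool
    not' a = nor a a                 -- NOR(a, a) = NOT a

    or' :: Bool -> Bool -> Bool
    or' a b = not' (nor a b)         -- OR is just NOR negated

    and' :: Bool -> Bool -> Bool
    and' a b = nor (not' a) (not' b) -- De Morgan: NOR(not a, not b) = a AND b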
I can understand writing your own instance and deployment "orchestrator", but I would not consider trying k8s for fun, because it just seems like some arbitrary config you have to learn for something someone else has built.
Nobody who uses bit flags does it because they think it makes them look clever. If anything, they believe it looks like using the most basic of tools.
> One should not have to worry about solar flares!
Do you legitimately believe that a bool is immune to this? Yeah, I get that this is a joke, but it's one told from a conceit.
This whole post comes off as condescending to cover up a failure to understand something basic.
I get it, someone called you out on it at some point in your life and you have decided to strike back, but come on... do you really think you benefit from this display?
I made a concerted effort to understand the code before I made any effort to adapt it to the repo I was working on. I'm glad I did (although honestly, it wasn't in the remotest bit necessary to solve the task at hand!)
> someone who has "curiosity, passion, focus, creative problem solving" regarding programming
would find this on their journey. Whereas you are describing someone who merely views programming as a job.
It's perfectly fine to view programming as "just work" but that's not passion. All of the truly great engineers I've worked with are obsessed with programming and comp sci topics in general. They study things in the evening for fun.
It's clear that you don't, and again, that's fine, but that's not what we're talking about.
Software for you is a job. I work in software because it's an excuse to get paid to do what I love doing. In the shadow of the dotcom bust most engineers were like this, and they were more technical and showed more expertise than most software engineers do today.
It might be an indication of passion, but not knowing them does not indicate a lack of it.
The thing is, there is an unfathomable amount of such things to learn, and if somebody doesn't stumble upon a particular one, or doesn't spend time with it, that doesn't indicate a lack of passion either.
I'm sorry but bitwise operators are the most fundamental aspect of how computing works. If you don't understand these things it does indicate a lack of passion, at least in regards to programming and computation.
Bitwise operators are not a particularly complex topic and follow from the basics of how logic is implemented in a computer system.
Personally, I think everyone interested in programming should have, at least once, implemented a compiler (or at least an interpreter). But a compiler is a substantial amount of work, so I understand that not everyone has that opportunity. Understanding bitwise operators, on the other hand, requires a minimal investment of time and is essential to really understanding the basics of computing.
Guess I have a lack of passion then, despite building countless full-stack side projects and whatnot even before I knew these things could make me money, back in my teenage years. I started out with PHP using txt files as storage in my teens, so none of what you said would be relevant to what I've experienced, and that was almost 20 years ago. And in my high school years I took part in coding competitions where I scored high nationally without knowing anything about bitwise operations or the like, despite deeply enjoying those competitions.
This weird elitism drives me mad, but maybe truly I haven't done enough to prove my worthiness in this arena. Maybe PHP or JavaScript is not true programming.
This is how I describe Buc-ee's to those who aren't familiar - Imagine a convenience store that's the size of a football field. Inside is freshly made (but not at all healthy) food and the cleanest bathrooms you will ever see at a gas station. Next to the store, imagine another football field of gas pumps. Finally, all around the pumps and store is enough paved land to fill yet another football field.
You can spend 30 minutes in there pretty easily. I would say you could probably spend an hour in there if you tried a little bit. Lots of hot, fresh food selections (they're best known for their brisket sandwiches), lots of gas station food like chips, jerky, etc., cold drinks, and tons of general merchandise like T-shirts, Christmas decorations, and so on. They also have large, clean, well-maintained bathrooms.
I've kinda figured that the basic business model is they offer just about everything you need and want if you are on a long car trip: food, drink, gas, clean restrooms, a chance to stretch your legs for a little bit. And while you are stretching your legs they sell you a Buc-ee's T-shirt and tumbler.
You could think of it as a proof saying which polynomials are solvable by which algorithms. Solvable by radicals is one class of simpler algorithm and it so happens we have a cute proof as to when it will work or fail.
In short, tensors generalize matrices. While we can probably guess accurately what "4x4 matrix" means, "4x4 tensor" is missing some information to really nail down what it means.
Interestingly, that extra information helps us to differentiate between the same matrix being used in different "roles". For instance, if you have a 4x4 matrix A, you might think of it like a linear transformation. Given x: V = R^4 and y = Ax, then y is another vector in V. Alternatively, you might think of it like a quadratic form. Given two vectors x, y: V, the value x^T A y is a real number.
In linear algebra, we like to represent both of those operations as a matrix. As tensors, though, they are different: the first would be a rank-(1,1) tensor (one contravariant and one covariant index), the second a rank-(0,2) tensor (two covariant indices).
Ultimately, we might write down both of those tensors with the same 4x4 array of 16 numbers that we use to represent 4x4 matrices, but in the sort of math where all these subtle differences start to really matter, there are additional rules constraining how rank-(1,1) tensors are distinct from rank-(0,2) tensors.
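To sketch the distinction in index notation (my own notation, purely for illustration, with summation over repeated indices implied): the linear-map role reads $y^i = A^i{}_j x^j$, one index up and one down, while the bilinear-form role reads $q = B_{ij} x^i y^j$, both indices down; a metric $g_{ik}$ is exactly what lets you trade one role for the other, via $B_{ij} = g_{ik} A^k{}_j$.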
Not really to push back as I do agree that this is a bit trickier to get an intuition for than the OP suggests, but the most trivial concrete example of a (1, 1) tensor would just be the evaluation function (v, f) |-> f(v), which, given a metric, corresponds to the inner product.
I made a comment elsewhere on this thread that explains that symbols themselves being compact isn't so important, but that the set of descriptions of the symbols must be compact. For example, if the description of the symbol is not the symbol itself as a set, but a map f:[0,1]^2 -> [0,1] that describes the "intensity" of ink at each point, then the natural conclusion is that the description of a symbol must be upper semicontinuous, which makes the set of descriptions compact.
I don't think that's quite true. The lift here is that the state machine does not do any IO on its own. It always delegates that work to the event loop that's hosting it, which allows it to be interpreted in different contexts. That makes it more testable and more composable as it makes fewer assumptions about the runtime environment.
Theoretically, you could do the same thing with async/await constructing the state machines for you, although in practice it's pretty painful and most async/await code is impure.
There are lots of more experimental languages with exceptional support for this style of programming (Eff, Koka, Frank). Underlying all of Haskell's IO discourse is a very deep investment in several breeds of this kind of technology (free monads and their variants).
Lately, Unison has been a really interesting language which explores lots of new concepts but also has at its core an extensible effects system that provides excellent language-level support for this kind of coding.
> I don't think that's quite true. The lift here is that the state machine does not do any IO on its own.
Here is a simple counterexample. Suppose you have to process a packet that contains many sequences (strings/binary blobs), each prefixed by 4 bytes of length.
You are not always guaranteed to get the length bytes or the string all in one go. In a sequential system you'd accumulate the string as follows:
    handle_input(...):
        while not received 4 bytes:
            accumulate in buf
        len = toInt(buf[0..4])
        while not received len bytes:
            accumulate in buf
If implemented as a state machine, this would require two await points to assemble the string. Flattening this out into a state machine manually is a pain.
I'm not sure what part of that is supposed to be a pain. The sans-io equivalent would be:
    handle_input(buf) -> Result {
        if buf.len() < 4 { return Error::IncompletePacket }
        payload_len = toInt(buf[0..4])
        if buf.len() < 4 + payload_len { return Error::IncompletePacket }
        packet = buf[4..(4 + payload_len)]
        return Ok { packet: packet, consumed: 4 + payload_len }
    }
where the semantics of `Error::IncompletePacket` are that the caller reads more into the buffer from its actual IO layer and then calls handle_input() again. So your "while not received required bytes: accumulate in buf" simply becomes "if buf.len() < required: return Error::IncompletePacket".
I don't think that implementation is particularly good, although it does touch on a genuinely tricky question with Sans-IO: is the event loop responsible for buffering the input bytes, or are the state machines?
In effect, you have to be thoughtful (and explicit!) about the event loop semantics demanded by each state machine and, as the event loop implementer, you have to satisfy all of those semantics faithfully.
A few alternatives include your version; one where `handle_input` returns something like `Result<Option<Packet>>`, covering both error cases and successful partial-consumption cases; and one where `handle_input` tells the event loop how much additional input it knows it needs whenever it finishes parsing a length field, requiring that the event loop not call it again until it can hand over exactly that many bytes.
This can all be pretty non-trivial. And then you'd want to compose state machines with different anticipated semantics. It's not obvious how to do this well.
Fair enough. So let's complicate it a little. If you have hierarchical variable-sized structures within structures (e.g. a Java class file), then you need a stack of work in progress (pointers plus lengths) at every level. In fact, the moment you need a stack to simulate what would otherwise have been a series of function calls, it becomes a pain.
Or let's say you have a loop ("retry three times before giving up"), then you have to store the index in a recoverable struct. Put this inside a nested loop, and you know what I mean.
I have run into these situations enough that a flat state machine becomes a pain to deal with.
These are nicely solved using coroutines. That way you can have function-related temporary state, IO-related state, and stacks all taken care of simply.
I agree totally, it wasn't my intention to say that there aren't protocols which require non-trivial state machines to implement their behavior.
To be more clear, I'm contesting that the only thing being discussed in the article is this convenience around writing state machines. I think whether or not you have to write non-trivial state machines by hand or have them generated by some convenient syntax is orthogonal to the bigger insight of what Sans-IO is going after.
I think the most important part here is that you write these state machines such that they perform no impure computation on their own. In other words, you write state machines that must be driven by an event loop, which is responsible for interpreting commands from those state machines, so that all IO (and more generally, all impure computation) is performed exclusively by that event loop.
It's much easier to compose machines like this because they don't make as many assumptions about the runtime. It's not that they're reading from a blocking or non-blocking socket; it's that they process some chunk of bytes and _possibly_ want to send some chunk of bytes back. The event loop, constructed by the user of the state machine, is responsible for deciding how to read/write those bytes.
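A minimal sketch of that shape (hypothetical names, Haskell just for concreteness): a pure parser for length-prefixed packets that never touches a socket; whatever event loop owns the IO feeds it bytes and decides what to do with the packets it yields.

    import qualified Data.ByteString as BS
    import Data.ByteString (ByteString)

    -- Pure parser state: just the bytes buffered so far.
    -- (Toy framing: 1 length byte, then that many payload bytes.)
    newtype Machine = Machine ByteString

    newMachine :: Machine
    newMachine = Machine BS.empty

    -- Feed bytes in, get complete packets out plus the updated machine.
    -- No IO anywhere: the event loop decides where input comes from and
    -- what "deliver a packet" means.
    feed :: Machine -> ByteString -> ([ByteString], Machine)
    feed (Machine buf) input = go [] (buf <> input)
      where
        go acc bytes =
          case BS.uncons bytes of
            Just (len, rest)
              | BS.length rest >= fromIntegral len ->
                  let (packet, remaining) = BS.splitAt (fromIntegral len) rest
                  in go (packet : acc) remaining
            _ -> (reverse acc, Machine bytes)

A production event loop reads from its socket and calls `feed`; a unit test calls `feed` with hand-rolled byte strings; neither needs to know about the other.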